Best way to handle different views of the same page?
-
Say I have a page: mydomain.com/page
But I also have different views:
/?sort=alpha
/print-version
/?session_ID=2892
etc. All same content, more or less.
Should these variant pages have a ROBOTS meta tag with noindex? Should I use canonical? Both?
Thanks!
-
I generally trust Duane, so I'd give it some weight - I just haven't seen that problem pop up much in practice. Theoretically, you'd create a loop: if value leaked, it would keep looping and leaking until no juice was left. That seems like an odd way to handle the issue.
My bigger concern would be the idea that, if you rel-canonical every page, Bing might not take your important canonical tags seriously. They've suggested they do this with XML sitemaps, too - if enough of the map is junk, they may ignore the whole thing. Again, I haven't seen any firm evidence of this, but it's worth keeping your eyes open.
-
What do you think about what Duane said about a page assigning value to itself? Could this be a link-juice leak, since it would be a leak if the page were assigning value to another page?
-
I haven't seen evidence they'll lose trust yet, but it's definitely worth noting. Google started out saying that, too, but then eased up, because they realized it was hard enough to implement canonical tags even close to correctly (without adding new restrictions). I agree that, in a perfect world, it shouldn't just be a Band-aid.
-
I am not sure if SEOmoz will, but the search engines won't, as the page won't be in their index.
-
Thanks gentlemen. I will probably just go with the NOINDEX in the robots meta tag and see how that works.
Interesting side note: SEOmoz will still report this as a duplicate page though ;-( Hopefully the search engines won't.
-
Yes, I agree that for most it is probably not going to be a problem. But Duane blogged about this again yesterday; he did say they can live with it, but they don't like it, and the best thing is to fix it. http://www.bing.com/community/site_blogs/b/webmaster/archive/2011/11/29/nine-things-you-need-to-control.aspx
This leaves me in two minds: he said they may lose trust in all your canonicals if they see the tag overused, which is a worry if you have used it for its true purpose elsewhere.
I also worry about loss of link juice, as Duane's words in the first blog post were, "Please pass any value from itself to itself."
Does that mean it loses link juice in the process, like a normal canonical does?
I myself would fix it another way, but that may be a lot of work and bother for some. That's why I say it's a hard one.
-
I'll 80% agree with Alan, although I've found that, in practice, the self-referencing canonical tag is usually fine. It wasn't the original intent, but at worst the search engines ignore it. For something like a session_ID, it can be pretty effective.
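To make the self-referencing canonical concrete, here's a rough sketch (plain Python, hypothetical URLs from the question; in practice your CMS or templates would emit this tag): every variant of the page points back at the clean URL, and the clean URL points at itself - the case Duane warns about.

```python
from urllib.parse import urlsplit, urlunsplit

def canonical_tag(url):
    """Build a rel=canonical tag that points every variant of a page
    (query strings, session IDs) back at the clean, parameter-free URL."""
    parts = urlsplit(url)
    clean = urlunsplit((parts.scheme, parts.netloc, parts.path, "", ""))
    return '<link rel="canonical" href="%s">' % clean

# The clean page canonicals to itself (self-referencing):
print(canonical_tag("http://mydomain.com/page"))
# A session-ID variant canonicals back to the clean page:
print(canonical_tag("http://mydomain.com/page?session_ID=2892"))
```

Both calls emit the same tag pointing at the clean URL, which is what makes the session_ID case so effective.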
I would generally avoid Robots.txt blocking, as Alan said. If you can do a selective META NOINDEX, that's a safer bet here (for all 3 cases). You're unlikely to have inbound links to these versions of your pages, so you don't have to worry too much about link-juice. I just find that Robots.txt can be unpredictable, and if you block tons of pages, the search engines get crabby.
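A selective META NOINDEX could be wired up server-side along these lines (a sketch only - the parameter names are the ones from the question, and how you hook this into your templates will depend on your stack):

```python
from urllib.parse import urlsplit, parse_qs

def robots_meta(url):
    """Return a noindex,follow meta tag for the duplicate views
    (sort parameter, print version, session ID), or an empty string
    for the clean page that should stay in the index."""
    parts = urlsplit(url)
    params = parse_qs(parts.query)
    is_duplicate = (
        "sort" in params
        or "session_ID" in params
        or parts.path.endswith("/print-version")
    )
    if is_duplicate:
        # "follow" keeps link juice flowing even though the page is noindexed
        return '<meta name="robots" content="noindex, follow">'
    return ""
```

Unlike a Robots.txt block, this drops the variants from the index while still letting crawlers pass through any links on them.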
The other option for session_ID is to capture that ID as a cookie or server session, then 301-redirect to the URL with no session_ID. This one gets tricky fast, though, as it depends a lot on your implementation.
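That capture-and-redirect idea could look roughly like this - a hand-wavy sketch, not a drop-in implementation, since cookie handling and redirect mechanics depend entirely on your server setup:

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def handle_request(url):
    """If the URL carries a session_ID, stash it in a cookie and
    301-redirect to the same URL without it; otherwise serve normally."""
    parts = urlsplit(url)
    params = dict(parse_qsl(parts.query))
    session_id = params.pop("session_ID", None)
    if session_id is None:
        return {"status": 200, "headers": {}}
    clean = urlunsplit(
        (parts.scheme, parts.netloc, parts.path, urlencode(params), "")
    )
    return {
        "status": 301,
        "headers": {
            "Location": clean,
            "Set-Cookie": "session_ID=%s; Path=/" % session_id,
        },
    }
```

The upshot is that search engines only ever see the clean URL, while the visitor's session survives in the cookie.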
Unless you're seeing serious problems (like a Panda smackdown), I'd strongly suggest tackling one at a time, so that you can measure the changes. Large-scale blocking and indexation changes are always tricky, and it's good to keep a close eye on the data. If you try to remove everything at once, you won't know which changes accomplished what (good or bad). It all comes down to risk/reward. If you aren't having trouble and are being proactive, take it one step at a time. If you're having serious problems, you may have to take the plunge all at once.
-
This is a hard one. Canonical is the easy choice, but Bing advises against it: you should not have a canonical pointing to itself, and it could lead to loss of trust in your website. I would not use robots for this, as you lose your flow of link juice.
I would try to noindex,follow all pages except the true canonical page using meta tags; this means some sort of server-side detection of when to place the tags.