Best way to handle different views of the same page?
-
Say I have a page: mydomain.com/page
But I also have different views:
/?sort=alpha
/print-version
/?session_ID=2892
etc. All same content, more or less.
Should the alternate views have a ROBOTS meta tag with NOINDEX? Should I use rel=canonical? Both?
Thanks!
-
I generally trust Duane, so I'd take it at some value - I just haven't seen that problem pop up much, practically. Theoretically, you'd create a loop - so, if it leaked, it would keep looping/leaking until no juice was left. That seems like an odd way to handle the issue.
My bigger concern would be the idea that, if you rel-canonical every page, Bing might not take your important canonical tags seriously. They've suggested they do this with XML sitemaps, too - if enough of the map is junk, they may ignore the whole thing. Again, I haven't seen any firm evidence of this, but it's worth keeping your eyes open.
-
What do you think about what Duane said about a page assigning value to itself? Could this be a link-juice leak, the same way it would be if the page were assigning value to another page?
-
I haven't seen evidence they'll lose trust yet, but it's definitely worth noting. Google started out saying that, too, but then eased up, because they realized it was hard enough to implement canonical tags even close to correctly (without adding new restrictions). I agree that, in a perfect world, it shouldn't just be a Band-aid.
-
I am not sure if SEOmoz will, but the search engines won't, as the page won't be in their index.
-
Thanks gentlemen. I will probably just go with the NOINDEX in the robots meta tag and see how that works.
Interesting side note: SEOmoz will still report this as a duplicate page though ;-( Hopefully the search engines won't.
-
Yes, I agree that for most sites it probably won't be a problem. But Duane blogged about this again yesterday; he did say they can live with it, but they don't like it, and the best thing is to fix it: http://www.bing.com/community/site_blogs/b/webmaster/archive/2011/11/29/nine-things-you-need-to-control.aspx
This leaves me in two minds. He said they may lose trust in all your canonicals if they see the tag overused, which is a worry if you have used it for its true purpose elsewhere.
I also worry about loss of link juice, as Duane's words in the first blog post were, "Please pass any value from itself to itself".
Does that mean it loses link juice in the process, like a normal canonical does?
I myself would fix it another way, but that may be a lot of work and bother for some. That's why I say it's a hard one.
-
I'll 80% agree with Alan, although I've found that, in practice, the self-referencing canonical tag is usually fine. It wasn't the original intent, but at worst the search engines ignore it. For something like a session_ID, it can be pretty effective.
I would generally avoid Robots.txt blocking, as Alan said. If you can do a selective META NOINDEX, that's a safer bet here (for all 3 cases). You're unlikely to have inbound links to these versions of your pages, so you don't have to worry too much about link-juice. I just find that Robots.txt can be unpredictable, and if you block tons of pages, the search engines get crabby.
The other option for session_ID is to capture that ID as a cookie or server session, then 301-redirect to the URL with no session_ID. This one gets tricky fast, though, as it depends a lot on your implementation.
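As a rough sketch of that session_ID option (the helper name and parameter name are assumptions, not a drop-in implementation): strip the session parameter from the URL, keep its value for your cookie or server session, and 301-redirect to the clean URL.

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def strip_session_id(url, param="session_ID"):
    """Return (clean_url, session_id). clean_url drops the session
    parameter so it can be used as the 301 Location header; session_id
    (or None) is the value you'd stash in a cookie or server session."""
    parts = urlsplit(url)
    session_id = None
    kept = []
    for key, value in parse_qsl(parts.query, keep_blank_values=True):
        if key == param:
            session_id = value
        else:
            kept.append((key, value))
    clean = urlunsplit(parts._replace(query=urlencode(kept)))
    return clean, session_id
```

If the returned session_id is not None, set the cookie and respond with a 301 to clean_url; otherwise serve the page normally. As noted above, the details depend heavily on your implementation.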
Unless you're seeing serious problems (like a Panda smackdown), I'd strongly suggest tackling one at a time, so that you can measure the changes. Large-scale blocking and indexation changes are always tricky, and it's good to keep a close eye on the data. If you try to remove everything at once, you won't know which changes accomplished what (good or bad). It all comes down to risk/reward. If you aren't having trouble and are being proactive, take it one step at a time. If you're having serious problems, you may have to take the plunge all at once.
-
This is a hard one. Canonical is the easy choice, but Bing advises against it: you should not have a canonical pointing to itself, and it could lead to a loss of trust in your website. I would not use robots.txt for this, as you lose your flow of link juice.
I would try to NOINDEX, FOLLOW all pages except the true canonical page using meta tags. This means some sort of server-side detection of when to place the tags.
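A minimal sketch of that server-side detection, assuming the duplicate views from the question (a sort parameter, a session_ID parameter, and a /print-version path). The rule sets here are illustrative; you'd adapt them to your own URL scheme.

```python
from urllib.parse import urlsplit, parse_qs

# Query parameters and path suffixes that mark a non-canonical view
# (assumed from the question's examples).
DUPLICATE_PARAMS = {"sort", "session_ID"}
DUPLICATE_SUFFIXES = ("/print-version",)

def robots_meta_tag(url):
    """Return the robots meta tag to emit for this URL: NOINDEX, FOLLOW
    for duplicate views, INDEX, FOLLOW for the true canonical page."""
    parts = urlsplit(url)
    is_duplicate = (
        parts.path.endswith(DUPLICATE_SUFFIXES)
        or DUPLICATE_PARAMS & parse_qs(parts.query).keys()
    )
    directive = "NOINDEX, FOLLOW" if is_duplicate else "INDEX, FOLLOW"
    return '<meta name="robots" content="%s">' % directive
```

The template layer would call this once per request and drop the resulting tag into the page head, so the canonical page stays indexable while the alternate views pass link juice without being indexed.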