Best way to handle different views of the same page?
-
Say I have a page: mydomain.com/page
But I also have different views:
/?sort=alpha
/print-version
/?session_ID=2892
etc. All same content, more or less.
Should the subsequent pages have a ROBOTS meta tag with noindex? Should I use canonical? Both?
Thanks!
-
I generally trust Duane, so I'd take it at face value - I just haven't seen that problem pop up much in practice. Theoretically, you'd create a loop: if value leaked, it would keep looping and leaking until no juice was left. That seems like an odd way to handle the issue.
My bigger concern would be the idea that, if you rel-canonical every page, Bing might not take your important canonical tags seriously. They've suggested they do this with XML sitemaps, too - if enough of the map is junk, they may ignore the whole thing. Again, I haven't seen any firm evidence of this, but it's worth keeping your eyes open.
-
What do you think about what Duane said about a page assigning value to itself? Could this be a link-juice leak, since it would be a leak if the page were assigning value to another page?
-
I haven't seen evidence they'll lose trust yet, but it's definitely worth noting. Google started out saying that, too, but then eased up, because they realized it was hard enough to implement canonical tags even close to correctly (without adding new restrictions). I agree that, in a perfect world, it shouldn't just be a Band-aid.
-
I'm not sure if SEOmoz will, but the search engines won't, as it won't be in their index.
-
Thanks, gentlemen. I will probably just go with the NOINDEX in the robots meta tag and see how that works.
Interesting side note: SEOmoz will still report this as a duplicate page, though ;-( Hopefully the search engines won't.
-
Yes, I agree that for most sites it's probably not going to be a problem. But Duane blogged about this again yesterday; he did say they can live with it, but they don't like it, and the best thing is to fix it. http://www.bing.com/community/site_blogs/b/webmaster/archive/2011/11/29/nine-things-you-need-to-control.aspx
This leaves me in two minds. He said they may lose trust in all your canonicals if they see the tag overused, which is a worry if you have used it for its true purpose elsewhere.
I also worry about loss of link juice, as Duane's words in the first blog post were, "Please pass any value from itself to itself".
Does that mean it loses link juice in the process, like a normal canonical does?
I myself would fix it another way, but that may be a lot of work and bother for some. That's why I say it's a hard one.
-
I'll 80% agree with Alan, although I've found that, in practice, the self-referencing canonical tag is usually fine. It wasn't the original intent, but at worst the search engines ignore it. For something like a session_ID, it can be pretty effective.
I would generally avoid Robots.txt blocking, as Alan said. If you can do a selective META NOINDEX, that's a safer bet here (for all 3 cases). You're unlikely to have inbound links to these versions of your pages, so you don't have to worry too much about link-juice. I just find that Robots.txt can be unpredictable, and if you block tons of pages, the search engines get crabby.
The other option for session_ID is to capture that ID as a cookie or server session, then 301-redirect to the URL with no session_ID. This one gets tricky fast, though, as it depends a lot on your implementation.
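As a rough sketch of that redirect idea (framework-agnostic, and the parameter name is just the one from this thread), the server would pull the session ID out of the query string, keep it for a cookie or server session, and send the visitor on to the clean URL with a 301:

```python
from urllib.parse import urlencode, urlparse, parse_qsl, urlunparse

def canonical_redirect(url, param="session_ID"):
    """If the URL carries a session parameter, return (session_value, clean_url)
    so the caller can set a cookie and 301-redirect to clean_url.
    Returns (None, url) when the URL is already clean."""
    parts = urlparse(url)
    query = parse_qsl(parts.query, keep_blank_values=True)
    session = next((v for k, v in query if k == param), None)
    if session is None:
        return None, url  # no session parameter: serve the page normally
    remaining = [(k, v) for k, v in query if k != param]
    clean = urlunparse(parts._replace(query=urlencode(remaining)))
    return session, clean  # caller sets the cookie and issues the 301
```

A real implementation would then set the cookie and emit the `Location` header in whatever framework the site runs on; the tricky part Dr. Pete mentions is making sure every code path that builds links uses the clean URL.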
Unless you're seeing serious problems (like a Panda smackdown), I'd strongly suggest tackling one at a time, so that you can measure the changes. Large-scale blocking and indexation changes are always tricky, and it's good to keep a close eye on the data. If you try to remove everything at once, you won't know which changes accomplished what (good or bad). It all comes down to risk/reward. If you aren't having trouble and are being proactive, take it one step at a time. If you're having serious problems, you may have to take the plunge all at once.
-
This is a hard one. Canonical is the easy choice, but Bing advises against it: you should not have a canonical pointing to itself, and it could lead to loss of trust in your website. I would not use robots.txt for this, as you lose your flow of link juice.
I would try to noindex, follow all pages except the true canonical page using meta tags; this means some sort of server-side detection of when to place the tags.
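That server-side detection could be as simple as checking the requested URL against a list of non-canonical markers and only emitting the robots meta tag on those views. A minimal sketch (the parameters and path below are just the examples from this thread, not a general rule):

```python
from urllib.parse import urlparse, parse_qs

# Query parameters and path suffixes that mark a non-canonical view.
# These are the examples from this thread; adjust for your own site.
NONCANONICAL_PARAMS = {"sort", "session_ID"}
NONCANONICAL_SUFFIXES = ("/print-version",)

def robots_meta(url):
    """Return a robots meta tag for non-canonical views, '' for the true page.
    'noindex, follow' keeps the view out of the index while still letting
    link juice flow through the links on it."""
    parts = urlparse(url)
    params = set(parse_qs(parts.query))
    if params & NONCANONICAL_PARAMS or parts.path.endswith(NONCANONICAL_SUFFIXES):
        return '<meta name="robots" content="noindex, follow">'
    return ""
```

The page template would call something like this once per request and drop the returned string into the head, so only the true canonical page gets indexed.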