Best way to handle different views of the same page?
-
Say I have a page: mydomain.com/page
But I also have different views:
/?sort=alpha
/print-version
/?session_ID=2892
etc. All same content, more or less.
Should the subsequent pages have ROBOTS meta tag with noindex? Should I use canonical? Both?
Thanks!
-
I generally trust Duane, so I'd give it some weight - I just haven't seen that problem pop up much in practice. Theoretically, you'd create a loop - if it leaked, it would keep looping/leaking until no juice was left. That seems like an odd way to handle the issue.
My bigger concern would be the idea that, if you rel-canonical every page, Bing might not take your important canonical tags seriously. They've suggested they do this with XML sitemaps, too - if enough of the map is junk, they may ignore the whole thing. Again, I haven't seen any firm evidence of this, but it's worth keeping your eyes open.
-
What do you think about what Duane said about a page assigning value to itself? Could this be a link-juice leak, the same way it would be a leak if the page were assigning value to another page?
-
I haven't seen evidence they'll lose trust yet, but it's definitely worth noting. Google started out saying that, too, but then eased up, because they realized it was hard enough to implement canonical tags even close to correctly (without adding new restrictions). I agree that, in a perfect world, it shouldn't just be a Band-aid.
-
I am not sure if SEOmoz will, but the search engines won't, as the page won't be in their index.
-
Thanks gentlemen. I will probably just go with the NOINDEX in the robots meta tag and see how that works.
Interesting side note: SEOmoz will still report this as a duplicate page, though ;-( Hopefully the search engines won't.
-
Yes, I agree that for most it's probably not going to be a problem. But Duane blogged about this again yesterday; he did say they can live with it, but they don't like it, and the best thing is to fix it. http://www.bing.com/community/site_blogs/b/webmaster/archive/2011/11/29/nine-things-you-need-to-control.aspx
This leaves me in two minds: he said they may lose trust in all your canonicals if they see the tag overused, which is a worry if you have used it for its true purpose elsewhere.
I also worry about loss of link juice, as Duane's words in the first blog post were, "Please pass any value from itself to itself."
Does that mean it loses link juice in the process, like a normal canonical does?
I myself would fix it another way, but that may be a lot of work and bother for some. That's why I say it's a hard one.
-
I'll 80% agree with Alan, although I've found that, in practice, the self-referencing canonical tag is usually fine. It wasn't the original intent, but at worst the search engines ignore it. For something like a session_ID, it can be pretty effective.
I would generally avoid robots.txt blocking, as Alan said. If you can do a selective META NOINDEX, that's a safer bet here (for all 3 cases). You're unlikely to have inbound links to these versions of your pages, so you don't have to worry too much about link juice. I just find that robots.txt blocking can be unpredictable, and if you block tons of pages, the search engines get crabby.
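To make the "selective META NOINDEX" idea concrete, here's a minimal sketch (Python, purely for illustration) of the decision logic. The parameter names (`sort`, `session_ID`) and the `/print-version` suffix are assumptions taken from the examples in the question, not anyone's real setup - a real site would hook this into whatever generates the page head:

```python
from urllib.parse import urlparse, parse_qs

# Assumed URL variants from the question; adjust to your own site.
NOINDEX_PARAMS = {"sort", "session_ID"}

def needs_noindex(url: str) -> bool:
    """True if this URL is a secondary view (sorted listing, print
    version, or session-tagged URL) that should get META NOINDEX."""
    parsed = urlparse(url)
    if parsed.path.endswith("/print-version"):
        return True
    params = parse_qs(parsed.query)
    return any(name in params for name in NOINDEX_PARAMS)
```

When `needs_noindex()` returns True, the template would emit `<meta name="robots" content="noindex, follow">`; otherwise it emits nothing (or an explicit index,follow).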
The other option for session_ID is to capture that ID as a cookie or server session, then 301-redirect to the URL with no session_ID. This one gets tricky fast, though, as it depends a lot on your implementation.
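As a rough sketch of that session_ID approach (assuming the `session_ID` parameter name from the question; actually storing the captured value in a cookie or server session is left out), the URL-cleaning half might look like this - the server would then set the cookie and send a 301 to the clean URL:

```python
from urllib.parse import urlparse, parse_qsl, urlencode, urlunparse

def strip_session_redirect(url: str, param: str = "session_ID"):
    """Return (session_value, clean_url). session_value is None when the
    parameter is absent, meaning no redirect is needed."""
    parsed = urlparse(url)
    pairs = parse_qsl(parsed.query)
    session = next((v for k, v in pairs if k == param), None)
    if session is None:
        return None, url
    # Rebuild the query string without the session parameter.
    remaining = [(k, v) for k, v in pairs if k != param]
    clean = urlunparse(parsed._replace(query=urlencode(remaining)))
    return session, clean
```

As noted above, this gets tricky fast - it depends entirely on how your sessions are implemented, so treat it as a starting point, not a drop-in fix.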
Unless you're seeing serious problems (like a Panda smackdown), I'd strongly suggest tackling one at a time, so that you can measure the changes. Large-scale blocking and indexation changes are always tricky, and it's good to keep a close eye on the data. If you try to remove everything at once, you won't know which changes accomplished what (good or bad). It all comes down to risk/reward. If you aren't having trouble and are being proactive, take it one step at a time. If you're having serious problems, you may have to take the plunge all at once.
-
This is a hard one. Canonical is the easy choice, but Bing advises against it, as you should not have a canonical pointing to itself; it could lead to loss of trust in your website. I would not use robots.txt for this, as you lose your flow of link juice.
I would try to noindex,follow all pages except the true canonical page using meta tags. This means some sort of server-side detection of when to place the tags.
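A minimal sketch of that server-side detection, assuming the true canonical URL is known for each page and that a plain path+query comparison is good enough to tell the views apart - both are assumptions, not a prescription:

```python
from urllib.parse import urlparse

def robots_meta_tag(request_url: str, canonical_url: str) -> str:
    """Emit noindex,follow on every view of the page except the one
    true canonical URL (path and query must match exactly)."""
    req, canon = urlparse(request_url), urlparse(canonical_url)
    if (req.path, req.query) == (canon.path, canon.query):
        return '<meta name="robots" content="index,follow">'
    return '<meta name="robots" content="noindex,follow">'
```

The template layer would call this once per request and drop the result into the page head.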