Large site with faceted navigation using rel=canonical, but Google still has issues
-
First off, I just wanted to mention I did post this on one other forum so I hope that is not completely against the rules here or anything. Just trying to get an idea from some of the pros at both sources. Hope this is received well. Now for the question.....
"Googlebot found an extremely high number of URLs on your site:"
Gotta love these messages in GWT. Anyway, I wanted to get some other opinions here so if anyone has experienced something similar or has any recommendations I would love to hear them.
First off, the site is very large and utilizes faceted navigation to help visitors sift through results. For many months now I have had rel=canonical implemented so that each URL created by the faceted nav filters points back to the main category page. However, I still get these damn messages from Google every month or so saying that they found too many pages on the site. My main concern, obviously, is wasting crawler time on all these pages - the very pages where I am doing what they ask by telling them to ignore the duplicates and find the content on page X.
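To illustrate what I mean (the domain and paths are placeholders, not our real URLs), each filtered URL carries a tag like this in its <head>:
    <!-- on a filter page such as www.example.com/categoryname/a -->
    <link rel="canonical" href="http://www.example.com/categoryname/" />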
So at this point I am thinking about possibly using a robots.txt file to handle these, but I wanted to see what others around here thought before I dive into this arduous task. Plus, I am a little ticked off that Google is not following a standard they helped bring to the table.
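If I did go the robots.txt route, I assume it would look something along these lines (the Disallow patterns are hypothetical - they would have to match our actual filter URLs):
    User-agent: *
    # hypothetical patterns - match these to the real filter URLs
    Disallow: /categoryname/a
    Disallow: /categoryname/b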
Thanks in advance to those who take the time to respond.
-
Yes, that's a different situation. You're now talking about pagination, where, quite rightly, a canonical to the parent page should not be used.
For faceted/filtered navigation it seems like canonical usage is indeed the right way to go about it, given Peter's experience just mentioned above, and the article you linked to that says, "...(in part because Google only indexes the content on the canonical page, so any content from the rest of the pages in the series would be ignored)."
-
As for my situation, it worked out quite nicely - I just wasn't patient enough. After about 2 months the issue corrected itself for the most part, and I was able to remove about a million "waste" pages from the index. This is a very large site, so losing a million pages in a handful of categories helped me gain in a whole lot of other areas and spread the crawler around to more places that were important for us.
I also spent some time doing some restructuring of internal linking from some of our more authoritative pages that I believe also assisted with this, but in my case rel="canonical" worked out pretty nicely. Just took some time and patience.
-
I should actually add that Google doesn't condone using rel-canonical back to the main search page or page 1. They allow canonical to a "View All" or a complex mix of rel-canonical and rel=prev/next. If you use rel-canonical on too many non-identical pages, they could ignore it (although I don't often find that to be true).
Vanessa Fox just did a write-up on Google's approach:
http://searchengineland.com/implementing-pagination-attributes-correctly-for-google-114970
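Roughly, the two sanctioned patterns look like this (URLs are placeholders):
    <!-- Option 1: canonical every page in the series to a "View All" page -->
    <link rel="canonical" href="http://www.example.com/categoryname/view-all" />
    <!-- Option 2: rel=prev/next, e.g. in the <head> of page 2 of the series -->
    <link rel="prev" href="http://www.example.com/categoryname/?page=1" />
    <link rel="next" href="http://www.example.com/categoryname/?page=3" />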
I have to be honest, though - I'm not a fan of Google's approach. It's incredibly complicated, easy to screw up, doesn't seem to work in all cases, and doesn't work on Bing. This is a very complex issue and really depends on the site in question. Adam Audette did a good write-up:
http://searchengineland.com/five-step-strategy-for-solving-seo-pagination-problems-95494
-
Thanks Dr Pete,
Yes, I've used meta noindex on pages that are simply not useful in any way, shape, or form for Google to find.
I would be hesitant to noindex the filters in question, but it sounds promising that you are backing the canonical approach and that there is some latency in the reporting. Our PA and DA are extremely high and we get crawled daily, so I'm curious to try your measurement tip (inurl:) - a good one!
Many thanks.
Simon
-
I'm working on a couple of cases now, and it is extremely tricky. Google often doesn't re-crawl/re-cache deeper pages for weeks or months, so getting the canonical to work can be a long process. Still, it is generally a very effective tag, and once pages are re-crawled the cleanup can happen quickly.
I agree with others that Robots.txt isn't a good bet. It also tends to work badly with pages that are already indexed. It's good for keeping things out of the index (especially whole folders, for example), but once 1000s of pages are indexed, Robots.txt often won't clean them up.
Another option is META NOINDEX, but it depends on the nature of the facets.
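For reference, the tag itself - this is the noindex,follow variant, assuming you still want link equity to flow through the page:
    <meta name="robots" content="noindex, follow">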
A few things to check:
(1) Using site: with inurl:, monitor the faceted navigation pages in the Google index (see the example query after this list). Are the numbers gradually dropping? That's what you want to see - the GWT error may not update very often. Keep in mind that these numbers can be unreliable, so monitor them daily over a few weeks.
(2) Are there other URLs you're missing? On a large e-commerce site, it's entirely possible this wasn't the only problem.
(3) Did you cut the crawl paths? A common problem is that people canonical, 301-redirect, or NOINDEX, but then nofollow or otherwise cut links to those duplicates. Sounds like a good idea, except that the canonical tag has to be crawled to work. I see this a lot, actually.
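For (1), the query I have in mind looks something like this (the domain and path fragment are placeholders - use whatever string is unique to your faceted URLs):
    site:www.example.com inurl:categoryname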
-
Did you find a solution for this? I have exactly the same issue and have implemented the rel canonical in exactly the same way.
The issue you are trying to address is improving crawl bandwidth/equity by not letting Google crawl these faceted pages.
I am thinking of Ajax-loading these facets into the parent category page and/or adding nofollow to the links. But the pages have already been indexed, so I wonder if nofollow will have any effect.
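The nofollow I'm considering would sit on the facet links themselves, something like this (markup is just a sketch):
    <!-- hypothetical facet link -->
    <a href="/categoryname/a" rel="nofollow">A</a>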
Have you had any progress? Any further ideas?
-
Because rel=canonical does nothing more than give credit to the chosen page and avoid duplicate content. It does not tell the search engine to stop indexing or redirect. As far as finding the links, it has no effect.
-
thx
-
OK, sorry - I was thinking too many pages, not links.
Using noindex will not stop PR from flowing; the search engine will still follow the links.
-
Yeah, that is why I am not really excited about using robots.txt or even a noindex in this instance. They are not session IDs, but more like:
www.example.com/categoryname/a
www.example.com/categoryname/b
www.example.com/categoryname/c
etc.
which would show all products that start with those letters. There are a lot of other filters too, such as color, size, etc., but the bottom line is I point all those back to just www.example.com/categoryname using rel=canonical, and I am not understanding why it isn't working properly.
-
There are a large number of URLs like this because of the way the faceted navigation works. I have considered noindex, but I'm somewhat concerned, as we do get links to some of these URLs and would like to maintain some of that link juice. The warning shows up in Google Webmaster Tools when Googlebot finds a large number of URLs. The rest of the message reads like this:
"Googlebot encountered extremely large numbers of links on your site. This may indicate a problem with your site's URL structure. Googlebot may unnecessarily be crawling a large number of distinct URLs that point to identical or similar content, or crawling parts of your site that are not intended to be crawled by Googlebot. As a result Googlebot may consume much more bandwidth than necessary, or may be unable to completely index all of the content on your site."
rel=canonical should fix this, but apparently it is not.
-
Check how these pages are being generated and linked.
Robots.txt is not an ideal solution: if Google finds links to these pages elsewhere, the URLs can still end up indexed.
Normally, print pages won't have link value, and you may noindex them.
If there are pages with session IDs or campaign codes, use a canonical if they have link value; otherwise, noindex will be fine.
-
The rel=canonical will stop you from getting duplicate content flags, but there is still a large number of pages - it's not going to hide them.
I have never seen this warning. How many pages are we talking about? Either the number is very, very high, or they are confusing the crawler. You may need to noindex them.