Large site with faceted navigation using rel=canonical, but Google still has issues
-
First off, I just wanted to mention I did post this on one other forum so I hope that is not completely against the rules here or anything. Just trying to get an idea from some of the pros at both sources. Hope this is received well. Now for the question.....
"Googlebot found an extremely high number of URLs on your site:"
Gotta love these messages in GWT. Anyway, I wanted to get some other opinions here so if anyone has experienced something similar or has any recommendations I would love to hear them.
First off, the site is very large and uses faceted navigation to help visitors sift through results. For many months now I have had rel=canonical in place so that every URL generated by the faceted nav filters points back to the main category page. However, I still get these damn messages from Google every month or so saying that they found too many pages on the site. My main concern, obviously, is the crawler wasting time on all these pages when I am already doing what they ask in these instances: telling them to ignore the filtered URLs and find the content on page X.
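To be concrete, every filtered URL carries something like this in its <head> (the URLs here are placeholders, not our real ones):

    <!-- on a faceted URL such as https://www.example.com/categoryname/a -->
    <link rel="canonical" href="https://www.example.com/categoryname" />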
So at this point I am thinking about possibly using the robots.txt file to handle these, but I wanted to see what others around here thought before I dive into this arduous task. Plus, I am a little ticked off that Google is not following a standard they helped bring to the table.
Thanks in advance to those who take the time to respond.
-
Yes, that's a different situation. You're now talking about pagination, where, quite rightly, a canonical to the parent page should not be used.
For faceted/filtered navigation, the canonical tag does seem to be the right way to go, given Peter's experience mentioned just above and the article you linked to, which says, "...(in part because Google only indexes the content on the canonical page, so any content from the rest of the pages in the series would be ignored)."
-
As for my situation, it worked out quite nicely; I just wasn't patient enough. After about two months the issue corrected itself for the most part, and I was able to get about a million "waste" pages out of the index. This is a very large site, so losing a million pages across a handful of categories helped me gain in a whole lot of other areas and spread the crawler around to more places that were important to us.
I also spent some time restructuring internal links from some of our more authoritative pages, which I believe also helped, but in my case rel="canonical" worked out pretty nicely. It just took some time and patience.
-
I should add that Google doesn't condone using rel=canonical to point back to the main search page or page 1 of a series. They do allow a canonical to a "View All" page, or a fairly complex mix of rel=canonical and rel=prev/next. If you use rel=canonical on too many non-identical pages, they could ignore it (although I don't often find that to be the case).
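As a rough sketch of the pattern Google describes (all URLs here are placeholders), page 2 of a paginated series would declare its neighbors and, if you go the View All route, canonical to that page:

    <!-- in the <head> of page 2 of a paginated series -->
    <link rel="prev" href="http://www.example.com/category?page=1" />
    <link rel="next" href="http://www.example.com/category?page=3" />
    <link rel="canonical" href="http://www.example.com/category/view-all" />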
Vanessa Fox just did a write-up on Google's approach:
http://searchengineland.com/implementing-pagination-attributes-correctly-for-google-114970
I have to be honest, though - I'm not a fan of Google's approach. It's incredibly complicated, easy to screw up, doesn't seem to work in all cases, and doesn't work on Bing. This is a very complex issue and really depends on the site in question. Adam Audette did a good write-up:
http://searchengineland.com/five-step-strategy-for-solving-seo-pagination-problems-95494
-
Thanks Dr Pete,
Yes, I've used meta noindex on pages that are simply not useful in any way, shape, or form for Google to find.
I would be hesitant to noindex the filters in question, but it sounds promising that you are backing the canonical approach and that there is some latency in the reporting. Our PA and DA are extremely high and we get crawled daily, so I'm curious to try your measurement tip (inurl:), which is a good one!
Many thanks.
Simon
-
I'm working on a couple of cases now, and it is extremely tricky. Google often doesn't re-crawl/re-cache deeper pages for weeks or months, so getting the canonical to work can be a long process. Still, it is generally a very effective tag, and in some cases it does kick in quickly.
I agree with others that robots.txt isn't a good bet. It tends to work badly with pages that are already indexed: it's good for keeping things out of the index in the first place (whole folders, for example), but once thousands of pages are indexed, robots.txt often won't clean them up.
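If you did go the robots.txt route for a whole folder, a minimal sketch would look like this (the path is a placeholder); remember it only blocks crawling going forward and won't remove what's already indexed:

    User-agent: *
    # Block crawling of an entire faceted/filter folder (placeholder path)
    Disallow: /categoryname/filter/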
Another option is META NOINDEX, but it depends on the nature of the facets.
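If the facets do suit it, the tag itself is one line in each filtered page's <head>; the noindex, follow combination (my assumption about what you'd want here) drops the page from the index while still letting Googlebot follow its links:

    <!-- remove this page from the index, but keep following its links -->
    <meta name="robots" content="noindex, follow" />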
A couple of things to check:
(1) Using site: combined with inurl:, monitor the faceted navigation pages in the Google index (see the example queries after this list). Are the numbers gradually dropping? That's what you want to see - the GWT error may not update very often. Keep in mind that these numbers can be unreliable, so monitor them daily over a few weeks.
(2) Are there other URLs you're missing? On a large e-commerce site, it's entirely possible this wasn't the only problem.
(3) Did you cut the crawl paths? A common problem is that people canonical, 301-redirect, or NOINDEX, but then nofollow or otherwise cut the links to those duplicates. It sounds like a good idea, except that the canonical tag has to be crawled to work. I see this a lot, actually.
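For point (1), the queries I have in mind look something like this (the domain and URL patterns are placeholders for your own):

    site:example.com inurl:/categoryname/
    site:example.com inurl:color=

Run the same queries on a regular schedule and track the totals; the trend over a few weeks matters more than any single day's count.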
-
Did you find a solution for this? I have exactly the same issue and have implemented rel=canonical in exactly the same way.
The issue you are trying to address is improving crawl bandwidth/equity by not letting Google crawl these faceted pages.
I am thinking of Ajax-loading these pages into the parent category page and/or adding nofollow to the links. But the pages have already been indexed, so I wonder if nofollow will have any effect.
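For reference, the nofollow variant is just an attribute on each filter link (placeholder URL below); as far as I know it only discourages crawling through that link and won't remove pages that are already indexed:

    <a href="https://www.example.com/categoryname/a" rel="nofollow">Products starting with A</a>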
Have you had any progress? Any further ideas?
-
Because rel=canonical does nothing more than give credit to the chosen page and avoid duplicate content. It does not tell the search engine to stop indexing or to redirect, and it has no effect on whether the links are found in the first place.
-
thx
-
OK, sorry - I was thinking too many pages, not links.
Using noindex will not stop PageRank from flowing; the search engine will still follow the links.
-
Yeah, that is why I am not really excited about using robots.txt or even noindex in this instance. They are not session IDs, but more like:
www.example.com/categoryname/a
www.example.com/categoryname/b
www.example.com/categoryname/c
etc.
each of which shows all products starting with that letter. There are a lot of other filters too, such as color and size, but the bottom line is that I point all of those back to www.example.com/categoryname using rel=canonical, and I am not understanding why it isn't working properly.
-
There are a large number of URLs like this because of the way the faceted navigation works. I have considered noindex, but I'm somewhat concerned because we do get links to some of these URLs and would like to keep some of that link juice. The warning shows up in Google Webmaster Tools when Googlebot finds a large number of URLs; the rest of the message reads like this:
"Googlebot encountered extremely large numbers of links on your site. This may indicate a problem with your site's URL structure. Googlebot may unnecessarily be crawling a large number of distinct URLs that point to identical or similar content, or crawling parts of your site that are not intended to be crawled by Googlebot. As a result Googlebot may consume much more bandwidth than necessary, or may be unable to completely index all of the content on your site."
rel=canonical should fix this, but apparently it is not.
-
Check how these pages are being generated and discovered in the first place.
Robots.txt is not an ideal solution; if Google finds links to these pages elsewhere, the URLs can still end up in the index even though crawling is blocked.
Normally, print pages won't have any link value, so you can safely noindex them.
If there are pages with session IDs or campaign codes, use a canonical tag if they have link value; otherwise noindex will do the job.
-
Rel=canonical will stop you from getting flagged for duplicate content, but there is still a large number of pages - it's not going to hide them.
I have never seen this warning. How many pages are we talking about? Either the number is very, very high, or the pages are confusing the crawler. You may need to noindex them.