Large site with faceted navigation using rel=canonical, but Google still has issues
-
First off, I just wanted to mention that I did post this on one other forum, so I hope that is not against the rules here or anything. Just trying to get ideas from some of the pros at both sources. Hope this is received well. Now for the question...
"Googlebot found an extremely high number of URLs on your site:"
Gotta love these messages in GWT. Anyway, I wanted to get some other opinions here, so if anyone has experienced something similar or has any recommendations, I would love to hear them.
First off, the site is very large and uses faceted navigation to help visitors sift through results. For many months now I have had rel=canonical implemented so that each URL generated by the faceted nav filters points back to the main category page. However, I still get these damn messages from Google every month or so saying they found too many pages on the site. My main concern, obviously, is wasting crawler time on all these pages, when I am doing exactly what Google asks in these situations and telling them to ignore the variants and find the content on page X.
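To illustrate the setup (placeholder URLs, not our real ones), every filtered page carries a standard canonical link element pointing back at its category:

    <!-- On a filtered URL such as www.example.com/categoryname?color=red -->
    <head>
      <link rel="canonical" href="http://www.example.com/categoryname" />
    </head>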
So at this point I am thinking about using a robots.txt file to handle these, but I wanted to see what others around here thought before I dive into this arduous task. Plus, I am a little ticked off that Google is not following a standard they helped bring to the table.
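If I do go down the robots.txt road, I imagine it would look something like this (the filter parameters are made up for illustration), using the wildcard patterns Googlebot supports:

    User-agent: *
    # Block crawling of faceted filter URLs (example parameters only)
    Disallow: /*?color=
    Disallow: /*?size=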
Thanks in advance to those who take the time to respond.
-
Yes, that's a different situation. You're now talking about pagination, where, quite rightly, a canonical to the parent page should not be used.
For faceted/filtered navigation it seems like canonical usage is indeed the right way to go about it, given Peter's experience just mentioned above, and the article you linked to that says, "...(in part because Google only indexes the content on the canonical page, so any content from the rest of the pages in the series would be ignored)."
-
As for my situation, it worked out quite nicely; I just wasn't patient enough. After about two months the issue corrected itself for the most part, and I was able to get about a million "waste" pages out of the index. This is a very large site, so losing a million pages across a handful of categories helped me gain in a whole lot of other areas and spread the crawler around to more of the places that were important to us.
I also spent some time restructuring the internal linking from some of our more authoritative pages, which I believe also helped, but in my case rel="canonical" worked out pretty nicely. It just took some time and patience.
-
I should actually add that Google doesn't condone using rel-canonical back to the main search page or page 1. They allow a canonical to a "View All" page, or a complex mix of rel-canonical and rel=prev/next. If you use rel-canonical on too many non-identical pages, they could ignore it (although I don't often find that to be true).
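For reference, the markup Google describes looks roughly like this on page 2 of a paginated series (placeholder URLs):

    <link rel="prev" href="http://www.example.com/categoryname?page=1" />
    <link rel="next" href="http://www.example.com/categoryname?page=3" />
    <!-- Or, if a "View All" page exists, each page in the series can point to it: -->
    <link rel="canonical" href="http://www.example.com/categoryname/view-all" />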
Vanessa Fox just did a write-up on Google's approach:
http://searchengineland.com/implementing-pagination-attributes-correctly-for-google-114970
I have to be honest, though - I'm not a fan of Google's approach. It's incredibly complicated, easy to screw up, doesn't seem to work in all cases, and doesn't work on Bing. This is a very complex issue and really depends on the site in question. Adam Audette did a good write-up:
http://searchengineland.com/five-step-strategy-for-solving-seo-pagination-problems-95494
-
Thanks Dr Pete,
Yes, I've used meta noindex on pages that are simply not useful in any way, shape, or form for Google to find.
I would be hesitant to noindex the filters in question, but it sounds promising that you're backing the canonical approach and that there's latency in the reporting. Our PA and DA are extremely high and we get crawled daily, so I'm curious to try your measurement tip (inurl). It's a good one!
Many thanks.
Simon
-
I'm working on a couple of cases now, and it is extremely tricky. Google often doesn't re-crawl/re-cache deeper pages for weeks or months, so getting the canonical to work can be a long process. Still, it is generally a very effective tag, and on some sites the cleanup happens quickly.
I agree with others that robots.txt isn't a good bet. It also tends to work badly with pages that are already indexed. It's good for keeping things out of the index in the first place (especially whole folders, for example), but once thousands of pages are indexed, robots.txt often won't clean them up.
Another option is META NOINDEX, but it depends on the nature of the facets.
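As a sketch, that would mean putting this in the head of each filtered page you want dropped (keeping "follow" so the links on the page still pass equity):

    <meta name="robots" content="noindex, follow" />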
A couple of things to check:
(1) Using site: with inurl:, monitor the faceted navigation pages in the Google index (see the example queries after this list). Are the numbers gradually dropping? That's what you want to see; the GWT error may not update very often. Keep in mind that these numbers can be unreliable, so monitor them daily over a few weeks.
(2) Are there other URLs you're missing? On a large e-commerce site, it's entirely possible this wasn't the only problem.
(3) Did you cut the crawl paths? A common problem is that people canonical, 301-redirect, or NOINDEX a set of pages, but then nofollow or otherwise cut the links to those duplicates. It sounds like a good idea, except that the canonical tag has to be crawled to work. I see this a lot, actually.
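For (1), the queries I mean look something like this (substitute your own domain and facet patterns):

    site:example.com inurl:color=
    site:example.com inurl:size=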
-
Did you find a solution for this? I have exactly the same issue and have implemented rel=canonical in exactly the same way.
The issue you are trying to address is improving crawl bandwidth/equity by not letting Google crawl these faceted pages.
I am thinking of Ajax-loading these filtered results into the parent category page and/or adding nofollow to the links. But the pages have already been indexed, so I wonder if nofollow will have any effect.
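To be clear, the nofollow idea would just mean tagging the filter links themselves, something like this (made-up link):

    <a href="/categoryname/a" rel="nofollow">Products starting with A</a>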
Have you had any progress? Any further ideas?
-
Because rel=canonical does nothing more than give credit to the chosen page and avoid duplicate content. It does not tell the search engine to stop indexing or to redirect, and it has no effect on whether the links are found.
-
thx
-
OK, sorry, I was thinking of too many pages, not links.
Using noindex will not stop PR from flowing; the search engine will still follow the links.
-
Yeah, that is why I am not really excited about using robots.txt or even a noindex in this instance. They are not session IDs, but more like:
www.example.com/categoryname/a
www.example.com/categoryname/b
www.example.com/categoryname/c
etc.
which would show all products that start with those letters. There are a lot of other filters too, such as color, size, etc., but the bottom line is that I point all of those back to just www.example.com/categoryname using rel=canonical, and I don't understand why it isn't working properly.
-
There are a large number of URLs like this because of the way the faceted navigation works. I have considered noindex, but I'm somewhat concerned because we do get links to some of these URLs and I would like to maintain some of that link juice. The warning shows up in Google Webmaster Tools when Googlebot finds a large number of URLs. The rest of the message reads like this:
"Googlebot encountered extremely large numbers of links on your site. This may indicate a problem with your site's URL structure. Googlebot may unnecessarily be crawling a large number of distinct URLs that point to identical or similar content, or crawling parts of your site that are not intended to be crawled by Googlebot. As a result Googlebot may consume much more bandwidth than necessary, or may be unable to completely index all of the content on your site."
rel=canonical should fix this, but apparently it is not working.
-
Check how these pages are being generated.
Robots.txt is not an ideal solution: if Google finds links to these pages elsewhere, the URLs can still end up indexed even though they are blocked from crawling.
Normally, print pages won't have link value, so you can noindex them.
If there are pages with session IDs or campaign codes, use canonical if they have link value; otherwise, noindex will do the job.
-
Rel=canonical will stop you from getting duplicate content flags, but there is still a large number of pages; it's not going to hide them.
I have never seen this warning. How many pages are we talking about? Either the number is very, very high, or they are confusing the crawler. You may need to noindex them.