"Google-selected canonical different to user-declared" - issues
-
Hi Moz!
We are having issues on a number of our international sites where Google is choosing page 2 of a category as the canonical over page 1. Example: https://www.yoursclothing.de/kleider-grosse-groessen (image attached).
We currently use infinite loading; however, when JavaScript is disabled we have a text link to page 2, which works via a query string of '?filter=true&view=X&categoryid=X&page=2'.
Page 2 is blocked via robots.txt and has a canonical pointing to page 1.
Because Google is selecting page 2 as the canonical, the page is no longer ranking. For the main keyphrase, a subcategory page is ranking poorly.
-
Sounds like you had the best of intentions by providing a non-JS fallback, but it came back to bite you.
By the way, this supports something I'm always, always banging on about: Google 'can' render JS and do headless browser renders of a web page when crawling, but they don't do this for everyone and they don't do it all the time (even for sites large enough to warrant the increased crawl resources). Rendered crawling is something like 10x slower than basic source code scraping, and Google's mission is to index the web. Obviously they're not going to take a 10x efficiency hit on their MO for just anyone.
Sorry about that, needed to get it off my chest, as people are always linking articles saying "LOOK! Google can do JS crawling now, we don't have to make sure our non-modified source code is solid any more". YES YOU DO, INTERNET.
OK, done now. Let's focus on the query at hand.
So you have this lovely page here which you have quoted: https://www.yoursclothing.de/kleider-grosse-groessen
It looks like this:
https://d.pr/i/QVNfKR.png (screenshot)
You can scroll it down and it infinitely loads - you only see the bottom of the results (with no page-change button) once the results run out, like this:
https://d.pr/i/XECK5Q.png (screenshot)
But when JS is disabled (or if you're fast like some kind of ninja cat and scroll to the bottom of the page to find the button before the infinite load modifies the page contents... but no, mainly just when JS is disabled), you get this button here:
https://d.pr/i/4Y9T9Y.png (screenshot)
... and when you click the button you end up on another page like this one:
https://www.yoursclothing.de/kleider-grosse-groessen?filter=true&view=32&categoryid=3440&page=2
... where you see "&page=2" at the end there, which is the parameter that changes the active page of results.
Google are sometimes choosing the sub-pages of results as canonical when you don't want them to. You want to know why, why what you have done isn't really working, and what you could do instead. Got it.
IMPORTANT Disclaimer: Google decides to rank pages for a number of reasons. If Google really does feel that, sometimes, sub-pages of your results are 'better' (maybe they have better products on some of the paginated URLs, a better mix of products, or products which fit Google's idea of fair pricing better than the default feed), there is no guarantee that 'correcting' this 'error' will result in the same rankings you have now. I just want to be 100% clear on that point: you might even lose some rankings if Google has really decided. They have told you they are overriding your choice, and usually there's some kind of reason behind that. Sometimes it's a 'just past the post' decision, where you can correct them and get basically the same rankings on other pages; other times you can lose rankings, or they just won't shift it.
Still with me? Ok let's look at what you did here:
- On page 2 (and page 3, and however many paginated URLs there are) you have a canonical tag pointing to the parent.
- And you have blocked the paginated URLs in robots.txt.
I need to start by querying the fact that you say the page 2s (and presumably other sub-pages, like page 3s - e.g. https://www.yoursclothing.de/kleider-grosse-groessen?filter=true&view=32&categoryid=3440&page=3) are blocked in robots.txt.
DeepCrawl's indexation plugin doesn't see them as blocked:
https://d.pr/i/1cRShK.png (screenshot)
It mentions the canonical tag, but it says nothing about robots.txt at all!
So let's look at your robots.txt file:
https://www.yoursclothing.de/robots.txt
https://d.pr/i/YbyEGl.png (screenshot)
Nothing under # BlockSecureAreas handles pagination
But then under # NoIndex we have this entry:
Disallow: /filter=true
That _should_ handle it, as pagination never occurs without a filter being applied (at least as far as I can see)
Indeed, using this tool that I like, if I paste in only the relevant parts:
https://d.pr/i/TVafTL.png (screenshot)
**We can see that the block is effective** (so DeepCrawl, your Chrome tool is probably wrong somehow - maybe they will see this, read it and fix it!)
I did notice that there's some weird, unrequired indentation in your robots.txt file. Could that cause problems for Google? Could it, at the very least, make Google think "well, if there are syntax errors in here, maybe it's not worth obeying as it's probably wrong"? Quite possibly.
In my opinion that's not likely to be part of it.
So if it's not that, then what?!
Well, it could be that you're using robots.txt in the wrong capacity. Robots.txt _doesn't_ stop Google from indexing web pages or tell them not to index web pages (which is why it's funny that you have commented with "# NoIndex" - that's not what robots.txt does!).
Robots.txt dissuades Google from 'crawling' (but not indexing) a URL. If they can find signals from around the web (maybe backlinks), or if they believe via other means that the content on the URL is worthwhile, they can (and will) still index a URL without necessarily crawling it. Robots.txt does not do what a meta noindex does (which can be fired through the HTTP header as X-Robots-Tag, or via an HTML meta tag).
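To make that distinction concrete, here's a minimal sketch of the two ways a noindex directive can be delivered. Python/Flask is purely a stand-in here (your actual platform will differ), and the route, template and "page" parameter are simplified placeholders rather than your real implementation:

```python
# Illustrative sketch only - Flask and the route below are assumptions, not the site's real stack.
from flask import Flask, make_response, render_template_string, request

app = Flask(__name__)

# Simplified placeholder for a paginated category page.
PAGE_TEMPLATE = """
<html>
<head>
  <!-- Option 1: noindex delivered in the HTML -->
  <meta name="robots" content="noindex">
  <!-- The canonical tag you already have can sit alongside it -->
  <link rel="canonical" href="https://www.yoursclothing.de/kleider-grosse-groessen">
</head>
<body>...results...</body>
</html>
"""

@app.route("/kleider-grosse-groessen")
def category():
    response = make_response(render_template_string(PAGE_TEMPLATE))
    if request.args.get("page"):
        # Option 2: the same directive fired through the HTTP response header instead
        response.headers["X-Robots-Tag"] = "noindex"
    return response
```

The header variant is handy for non-HTML resources or where editing templates is awkward; otherwise the meta tag does the same job.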
Also, riddle me this if you will: if Google isn't allowed to crawl your URLs any more, how will it continue to find your canonical tags, or find any new noindex tags? Why give Google a directive (a canonical tag) on a URL which Google isn't allowed to crawl, and thus will never see? Sounds backwards to me.
My proposed steps:

- Read, understand and make your own decision on the "disclaimer" I wrote up earlier in this very post.
- If you still want to go ahead, enact the following (otherwise don't!).
- Remove the robots.txt block so Google can crawl those URLs - or, if that rule covers more than just the paginated URLs, leave it in place but add an exception for the paginated URLs so they can be crawled.
- Leave all the canonical tags on (good work). Maybe supplement them with a 'noindex' directive, which would tell Google not to index those pages (there is no guarantee the canonical URL will replace the no-indexed URL, but you can try your luck - read the disclaimer).
- Maybe serve a 410 status code, only to Googlebot (by user-agent), when it visits the paginated URLs specifically, to encourage Google to think of those URLs as gone. Leave the contents alone, otherwise it's cloaking: serve the same content to Google and users, but serve Googlebot a 410 (Gone) status. There's a rough sketch of how this could look after this list.
- Before enacting the super-aggressive 410 stance, give Google plenty of time to swallow the new 'noindex' tags on paginated URLs which weren't there before. A 410, whilst powerful, may cause those tags never to be read - so do give Google time (a few weeks, in my opinion).
- If you do adopt the 410 stance, one downside is that Google will think your JS fallback is a broken link, and this will appear in Google Search Console. To make it less severe (though it will probably still happen), add a nofollow attribute to the pagination JS-fallback link/button wherever it appears (also shown in the sketch below).
- Once Google seems to have swallowed your wishes and seems to have removed most of these URLs from their index, THEN put the robots.txt block for paginated URLs back on (so it won't all happen again in the future).
- Try removing the weird indentation from your robots.txt file.
- Smile.
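For the 410-to-Googlebot idea and the nofollow'd fallback link, a rough sketch follows. Again, Python/Flask and the rendering helper are stand-ins for whatever your platform actually is, and the user-agent check is deliberately simplistic (proper Googlebot verification would involve a reverse DNS lookup):

```python
# Illustrative sketch only - your real platform, URLs and rendering will differ.
from flask import Flask, request

app = Flask(__name__)

def is_googlebot() -> bool:
    # Simplistic check for illustration; verify properly before using anything like this in production.
    return "googlebot" in request.headers.get("User-Agent", "").lower()

def render_category(page: int) -> str:
    # Hypothetical rendering helper. The non-JS fallback link to the next page
    # carries rel="nofollow" so the 410'd URLs create less noise in Search Console.
    next_link = (
        f'<a rel="nofollow" href="?filter=true&view=32&categoryid=3440&page={page + 1}">'
        "Weiter</a>"
    )
    return f"<html><body>...results for page {page}...{next_link}</body></html>"

@app.route("/kleider-grosse-groessen")
def category():
    page = request.args.get("page", type=int)
    body = render_category(page or 1)
    if page and page > 1 and is_googlebot():
        # Everyone gets the same HTML (so it isn't cloaking);
        # only Googlebot gets the 410 (Gone) status, and only on paginated URLs.
        return body, 410
    return body
```

The key design point is that only the status code changes per user-agent, never the content, which is what keeps this on the right side of the cloaking line as discussed in the steps above.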
Well, that's it from me. Thanks for this one - it was pretty interesting.