Any SEO-wizards out there who can tell me why Google isn't following the canonicals on some pages?
-
Hi,
I am banging my head against the wall over the website of a customer: under "duplicate title tags" in GSC I can see that Google is indexing a whole bunch of parameterized versions of many of the URLs on the site. When I check the rel=canonical tag, everything seems correct. My customer is the biggest sports retailer in Norway. Their webshop has approximately 20,000 products, yet they have more than 400,000 pages indexed by Google.
So why is Google indexing pages like this? What is missing in this canonical? https://www.gsport.no/herre/klaer/bukse-shorts?type-bukser-334=regnbukser&order=price&dir=desc
Why isn't Google just cutting off the ?type-bukser-334=regnbukser&order=price&dir=desc part of the URL? Could it be the canonical tag itself, or could the problem be somewhere in the CMS?
Looking forward to your answers
- Sigurd
-
Thank you all! I have forwarded this to the owner of the page, so now we'll just sit back and see the effects
-
Hi Inevo,
David's and Jake's comments and recommendations are spot on. You need to update your robots.txt file. Jake is correct when he says that "just because a canonical tag is in place, that doesn't prevent Google from crawling and indexing the page."
Sincerely,
Dana
-
Hi Inevo,
Canonical tags are being used correctly and it doesn't actually look like any of the URLs with query strings are indexed in Google.
I'm going to go off the topic of canonicals now, but still related to the crawl and index of the site:
Has the site changed CMS in the last year or two? It's possible that some of the 400k URLs indexed are old or were not canonicalized properly at some point in time, so they were indexed.
The problem with how the site is currently set up is that it is basically impossible for search engines to crawl fully because of the product filter. I wrote an article about this a while ago (link), specifically about product filters in Magento. Product filters can turn your site into a 'black hole' for search engines - which is definitely happening in this case (try crawling it with Screaming Frog).
I'd recommend blocking product filter URLs from being crawled so that search engines are only crawling important pages on the site.
You should be able to fix this by adding these 3 lines to your robots.txt:
Disallow: *?
Disallow: *+
Allow: *?p=
(Note: please check that you don't need to add more parameters to Allow.)
These changes will make crawling your site much more efficient - from millions of crawlable URLs, to probably 30-35k.
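Before deploying rules like these, it's worth sanity-checking which URLs they actually block. As a rough sketch (this is an illustrative re-implementation, not Google's actual matcher), Google-style robots.txt patterns treat `*` as a wildcard, match against the path plus query string, and resolve Allow/Disallow conflicts by letting the longest (most specific) matching rule win, with Allow winning ties:

```javascript
// Translate a Google-style robots.txt path pattern ('*' wildcard,
// optional '$' end anchor) into a regular expression.
function ruleToRegex(pattern) {
  var anchored = pattern.endsWith("$");
  if (anchored) pattern = pattern.slice(0, -1);
  var escaped = pattern
    .split("*")
    .map(function (part) { return part.replace(/[.*+?^${}()|[\]\\]/g, "\\$&"); })
    .join(".*");
  return new RegExp("^" + escaped + (anchored ? "$" : ""));
}

// 'path' should include the query string, since that's what the rules target.
function isCrawlable(path, allows, disallows) {
  var matches = [];
  allows.forEach(function (p) { if (ruleToRegex(p).test(path)) matches.push([p.length, 1]); });
  disallows.forEach(function (p) { if (ruleToRegex(p).test(path)) matches.push([p.length, 0]); });
  if (matches.length === 0) return true; // no rule matches: crawlable
  // Longest rule wins; on equal length, Allow beats Disallow.
  matches.sort(function (a, b) { return b[0] - a[0] || b[1] - a[1]; });
  return matches[0][1] === 1;
}

var allows = ["*?p="];
var disallows = ["*?", "*+"];
console.log(isCrawlable("/herre/klaer/bukse-shorts?type-bukser-334=regnbukser&order=price&dir=desc", allows, disallows)); // false
console.log(isCrawlable("/herre/klaer/bukse-shorts?p=2", allows, disallows)); // true
console.log(isCrawlable("/herre/klaer/bukse-shorts", allows, disallows));     // true
```

The robots.txt Tester in Search Console is the authoritative check, but a quick script like this makes it easy to run a whole crawl export through the proposed rules first.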
Let me know how this goes for you
Cheers,
David
-
I would definitely check to make sure the canonical tag is being used properly. Make sure it is an absolute URL rather than a relative URL.
That being said, please note that just because a canonical tag is in place, that doesn't prevent Google from crawling and indexing the page, and including the page in search results with the site:domain command. If you see the canonicalized URLs outranking their canonical, then you can start to question why Google isn't honoring the canonical.
Please note that canonical tags are a recommendation and not a directive, meaning Google doesn't have to honor them if it doesn't consider the page a true canonical.
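A quick way to check the absolute-vs-relative point is to run a small snippet in the browser console on a few of the parameterized URLs. This is just an illustrative helper (the function name and return shape are made up for the example), not a library API:

```javascript
// Report whether a document has a canonical tag and whether its
// href is an absolute URL (starts with http:// or https://).
function checkCanonical(doc) {
  var link = doc.querySelector('link[rel="canonical"]');
  if (!link) {
    return { ok: false, href: null, reason: "no canonical tag found" };
  }
  var href = link.getAttribute("href") || "";
  var absolute = /^https?:\/\//.test(href);
  return { ok: absolute, href: href, reason: absolute ? "absolute" : "relative or empty href" };
}

// In the console on any page: checkCanonical(document)
```

Note this only checks the rendered DOM; to see what Google gets on first fetch, compare against the raw HTML source as well.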
-Jake
Related Questions
-
Google page speed testing failures
When running our domain through google's page speed insights I'm getting the error message 'An error occurred while fetching or analyzing the page' at around 65% load. I'm concerned it's affecting our organic rankings. The domain is https://www.scottscastles.com/ When testing in https://testmysite.withgoogle.com/ it is also failing at around 70% with the message 'It's taking longer than expected. You can leave this tab open and check back in a little while. We'll have your results soon.' but the results never come. I've tried testing on a few different speed testing sites without failures (https://tools.pingdom.com, https://gtmetrix.com, https://www.webpagetest.org and a few others). We’re stumped as everything appears correct and was working but now isn't. Is this Google or us, or a combination of the two? Any help greatly appreciated!
Technical SEO | imaterus
-
Does Google read dynamic canonical tags?
Does Google recognize the rel=canonical tag if it's loaded dynamically via JavaScript? Here's what we're using to load it:

<script>
// Inject canonical link into page head
var canonicalLink = window.location.href;
if (window.location.href.indexOf("/subdirname1") != -1) {
  canonicalLink = window.location.href.replace("/kapiolani", "");
}
if (window.location.href.indexOf("/subdirname2") != -1) {
  canonicalLink = window.location.href.replace("/straub", "");
}
if (window.location.href.indexOf("/subdirname3") != -1) {
  canonicalLink = window.location.href.replace("/pali-momi", "");
}
if (window.location.href.indexOf("/subdirname4") != -1) {
  canonicalLink = window.location.href.replace("/wilcox", "");
}
if (canonicalLink != window.location.href) {
  var link = document.createElement('link');
  link.rel = 'canonical';
  link.href = canonicalLink;
  document.head.appendChild(link);
}
</script>
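For what it's worth, the repeated if-blocks above can be collapsed into a prefix list. This is only a sketch of equivalent URL logic (keeping the replace targets from the snippet), not a claim about whether Google will execute the injection:

```javascript
// Strip the first matching subdirectory prefix from the URL; if none
// matches, the page is its own canonical and no tag needs injecting.
var PREFIXES = ["/kapiolani", "/straub", "/pali-momi", "/wilcox"];

function canonicalFor(href) {
  for (var i = 0; i < PREFIXES.length; i++) {
    if (href.indexOf(PREFIXES[i]) !== -1) {
      return href.replace(PREFIXES[i], "");
    }
  }
  return href; // unchanged: skip the injection
}
```

Either way, since the tag only exists after JavaScript runs, it's worth verifying with the URL Inspection tool's rendered HTML that Googlebot actually sees it.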
Technical SEO | SoulSurfer
-
Should I disavow links from pages that don't exist any more
Hi. I'm doing a backlink audit for two sites, one with 48k and the other with 2M backlinks. Both are very old sites and both have tons of backlinks from old pages and websites that don't exist anymore, but these backlinks still exist in the Majestic Historic index. I cleaned up the obviously useless links and passed the rest through Screaming Frog to check whether those old pages/sites even exist. There are tons of link-sending pages that return 0, 301, 302, 307, 404, etc. errors. Should I consider all of these pages to be bad backlinks and add them to the disavow file? Just a clarification: I'm not talking about 301-ing a backlink to a new target page. I'm talking about the origin page generating an error at ping, e.g.: originpage.com/page-gone sends me a link to mysite.com/product1. Screaming Frog pings originpage.com/page-gone and returns a status error. Do I add originpage.com/page-gone to the disavow file or not? Hope I'm making sense 🙂
Technical SEO | IgorMateski
-
Link in my blog to external pages - no follow?
Hi everyone. I'm quite new to SEO (I started 6 months ago, but I'm very lucky to work for a company that is willing to pay while I'm learning). I also create some content for the website - I write the wood flooring industry blog. Following some advice from a few SEO experts around the world whom I follow, I have decided to write 1-2 off-topic blog posts per month, but I have found a way of writing off topic while in reality bringing together the wood flooring industry and, let's say, the fashion industry. My question is: if I want to link to one or two pages (let's say big fashion brands), shall I use a no-follow link? I do not want to harm my website or theirs. Can those types of posts be part of building my position within my industry and, in general, building the authority of my website? Of course, apart from getting links TO my website. Tom
Technical SEO | ESB
-
Pages removed from Google index?
Hi All, I had around 2,300 pages in the Google index until a week ago. The index removed a load and left me with 152 submitted, 152 indexed? I have just re-submitted my sitemap and will wait to see what happens. Any idea why it has done this? I have seen a drop in my rankings since. Thanks
Technical SEO | TomLondon
-
How can I see the SEO progress of a URL? I need to know the progress of a specific landing page of my site. Not a keyword, a URL please. Thanks.
I need to know the SEO evolution of a specific landing page (a URL) of my site. Not a keyword, a URL. Thanks. (I need to know whether it's possible to track the progress of a specific URL in Google's rankings. That is, what SEOmoz does with keywords, but in reverse: I have a specific URL that I want to get into the top positions on Google, and I want to see how it progresses as I apply changes. Many thanks.)
Technical SEO | online_admiral
-
Adding Rel Canonical to multiple pages
Hi, Our CMS generates a lot of duplicate content (different versions of every page for 3 different font sizes). There are many other reasons why we should drop this CMS and go with something else, and we are in the process of doing that. But for now, does anyone know how I would do the following? I've created a spreadsheet that contains: Column 1: the rel="canonical" tag for the URL; Column 2: Duplicate Content URL #1; Column 3: Duplicate Content URL #2; Column 4: Duplicate Content URL #3. I want to add the tag from column 1 into the head of every page from columns 2, 3, and 4. What would be a fast way to do this, considering that I have around 1,800 rows? Check the screenshot of the builtwith.com result to see more information about the website if that helps. Farris
Technical SEO | jdossetti
-
How can I get unimportant pages out of Google?
Hi Guys, I have a (newbie) question. Until recently I didn't have my robots.txt written properly, so Google indexed around 1,900 pages of my site, but only 380 pages are real pages; the rest are all /tag/ or /comment/ pages from my blog. I have now set up the sitemap and the robots.txt properly, but how can I get the other pages out of Google? Is there a trick, or will it just take a little time for Google to take out the pages? Thanks! Ramon
Technical SEO | DennisForte