If Google's index contains multiple URLs for my homepage, does that mean the canonical tag is not working?
-
I have a site that uses canonical tags on all pages; however, not all duplicate versions of the homepage are 301'd, due to a limitation in the hosting platform. So some site visitors get www.example.com/default.aspx while others just get www.example.com. I can see the correct canonical tag in the source code of both versions of the homepage, but when I search Google for the specific URL "www.example.com/default.aspx" I see that they've indexed that URL as well as the "clean" one. Is this a concern? Shouldn't Google only show me the clean URL?
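For context, the tag both versions of the homepage would carry looks something like the snippet below; the example.com placeholder is the question's own, and the https protocol is an assumption. The href should simply be the absolute URL of the preferred version.

<!-- Present in the <head> of both www.example.com and www.example.com/default.aspx -->
<link rel="canonical" href="https://www.example.com/" />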
-
In most cases, Google does seem to "de-index" the non-canonical URL once they process the tag. I put "de-index" in quotes because, technically, the page is still in Google's index; but as soon as it stops showing up at all (including in a "site:" search), I essentially consider it de-indexed. If we can't see it, it might as well not be there.
If 301-ing isn't an option, I'd double-check a few things:
(1) Is the non-canonical page ranking for anything (including very long-tail terms)?
(2) Are there any internal links to the non-canonical URL? These can send a strong mixed signal (a quick way to spot-check this is sketched just after this list).
(3) Are there any other mixed signals that might be throwing off the canonical? Examples include canonicals on other pages that contradict this one, 301s/302s that override the canonical, etc.
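One rough way to run that internal-link spot-check is straight from the browser console; a quick sketch, assuming the non-canonical version lives at /default.aspx as in the question:

// Run in the browser console on any page of the site: collects internal links
// whose path is the non-canonical homepage URL (/default.aspx is assumed here).
var offenders = Array.prototype.slice.call(document.querySelectorAll('a[href]'))
  .filter(function (a) {
    return a.hostname === location.hostname &&
      a.pathname.toLowerCase() === '/default.aspx';
  })
  .map(function (a) { return a.getAttribute('href'); });
console.log(offenders.length + ' internal link(s) to the non-canonical URL:', offenders);

A full site crawl will catch these more thoroughly, but this is a quick sanity check on any given page.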
-
As Digital-Diameter said, the best choice for fixing this problem is a 301. A canonical tag can eventually lead to the incorrect URL being replaced by the correct one in the SERPs, but it is also important to note that rel=canonical is a suggestion, not a directive: search engines will take it into consideration but may choose not to follow it.
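For completeness, when the hosting platform does allow it, this particular 301 is usually a one-rule job. A minimal sketch, assuming an IIS host with the URL Rewrite module and web.config access (the default.aspx in the question suggests IIS, though the hosting limitation mentioned there may rule this out):

<!-- web.config: permanently redirect /default.aspx to the clean homepage URL -->
<configuration>
  <system.webServer>
    <rewrite>
      <rules>
        <rule name="Canonical homepage" stopProcessing="true">
          <match url="^default\.aspx$" />
          <action type="Redirect" url="/" redirectType="Permanent" />
        </rule>
      </rules>
    </rewrite>
  </system.webServer>
</configuration>

On other stacks, any server-level permanent (301) redirect from /default.aspx to / achieves the same thing.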
-
Technically, a rel=canonical tag can still leave a page indexed; it mainly consolidates ranking signals onto the canonical URL for Google. From your question I can tell you know this, but I do have to say that 301s are the best way to address this. Blocking a page with robots.txt can help as well, but that only stops Google from crawling the page; an already-indexed page can still remain in the index.
If you have pages or versions of pages that you do not want indexed, you may want to use the noindex meta tag (Google has documentation on this). Be careful, though: it will stop these pages from being indexed, but they will still be crawled (though your rel=canonical solution should make this a non-issue).
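To make the distinction between the two mechanisms concrete, here is roughly what each looks like; the page path is a placeholder:

<!-- On a page you want dropped from the index (the page must stay crawlable so Google can see the tag): -->
<meta name="robots" content="noindex">

# robots.txt: blocks crawling only; a URL that is already indexed can stay indexed
User-agent: *
Disallow: /duplicate-page-example

Just don't combine a noindex tag with a robots.txt Disallow for the same URL, or Google may never recrawl the page and see the tag.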
A few other notes:
In all cases, be sure your internal links point consistently to the URL version you have chosen for your home page.
WMT (Google Webmaster Tools) also lists inbound links that point to missing or broken URLs. You can use this to help determine any additional 301s that you need.
Related Questions
-
Noindex tag vs. robots.txt
Hi Mozzers, A client's website has a lot of internal directories defined as /node/*. I already added the rule 'Disallow: /node/*' to the robots.txt file to prevent bots from crawling these pages. However, the pages are already indexed and appear in the search results. In an article from Deepcrawl, they say you can simply add the rule 'Noindex: /node/*' to the robots.txt file, but other sources claim the only way is to add a noindex directive in the meta robots tag of every page. Can someone tell me which is the best way to prevent these pages from getting indexed? Small note: there are more than 100 pages. Thanks!
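For reference, the two options described above look roughly like this. Note that the 'Noindex:' robots.txt rule was never officially documented by Google, so the meta robots tag (or an equivalent X-Robots-Tag: noindex HTTP header, which avoids editing 100+ templates) is the dependable route; the Disallow rule also has to be lifted long enough for crawlers to revisit the pages and see the noindex.

# robots.txt: the rule from the Deepcrawl article; unofficial and not reliable
Noindex: /node/*

<!-- The supported alternative, served on every /node/ page (or sent as an X-Robots-Tag header): -->
<meta name="robots" content="noindex">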
Technical SEO | WeAreDigital_BE
-
My Website's Home Page Is Missing from the Google SERPs
Hi All, I have a WordPress website which has about 10-12 pages in total. When I search for the brand name on Google, the home page URL isn't appearing on the result pages, while the rest of the pages are appearing. There are no issues with the canonicalization or meta titles/descriptions as such. What could possibly be the reason behind this? Looking forward to your advice! Cheers
Technical SEO | ugorayan
-
Does Google read dynamic canonical tags?
Does Google recognize the rel=canonical tag if it is loaded dynamically via JavaScript? Here's what we're using to load it (the subdirectory names are placeholders):

<script>
  // Inject a canonical link into the page head, stripping the location subdirectory from the URL
  var canonicalLink = window.location.href;
  if (window.location.href.indexOf("/subdirname1") != -1) {
    canonicalLink = window.location.href.replace("/subdirname1", "");
  }
  if (window.location.href.indexOf("/subdirname2") != -1) {
    canonicalLink = window.location.href.replace("/subdirname2", "");
  }
  if (window.location.href.indexOf("/subdirname3") != -1) {
    canonicalLink = window.location.href.replace("/subdirname3", "");
  }
  if (window.location.href.indexOf("/subdirname4") != -1) {
    canonicalLink = window.location.href.replace("/subdirname4", "");
  }
  if (canonicalLink != window.location.href) {
    var link = document.createElement('link');
    link.rel = 'canonical';
    link.href = canonicalLink;
    document.head.appendChild(link);
  }
</script>
Technical SEO | SoulSurfer8
-
301 Redirect URL Within a Canonical Tag
So this might sound like a silly question... A client of mine has a duplicate content issue which will be fixed using canonical tags. We are also providing them with an updated URL structure, meaning we will have to do lots of 301 redirects. The URL structure is a much larger task than the duplicate content fix, so I planned to set up the canonicals first. Then it occurred to me that I'd be updating the canonical tags with URLs from the old structure, which brings me to my question: will the canonical tags with the old URLs pass credit through to the new URLs via the 301? Or should I just wait until we have the new URL structure in place and use the new URLs in the canonicals? Thanks!
Technical SEO | NickG-123
-
Can blocked URL parameters still be crawled and indexed by Google?
Hi guys, I have two questions, and one might be a dumb question, but here it goes. I just want to be sure that I understand: if I tell Webmaster Tools to ignore a URL parameter, will Google still index and rank my URL? And is it OK if I don't append the brand filter in the URL structure? Will I still rank for that brand? Thanks. PS: OK, 3 questions :)...
Technical SEO | catalinmoraru
-
Should we use & or and in our URLs?
Example: /Zambia/kasanka-&-bangweulu or /Zambia/kasanka-and-bangweulu. Which is the better URL from the search engines' point of view?
Technical SEO | tribes
-
Should I worry about these 404s?
Just wondering what the thinking is on this. We have a site that lets people generate user profiles, and once they delete a profile, the page then 404s. Our developers told me there is nothing we can do about those, but I was wondering if I should worry about them... I don't think they will affect any of our rankings, but you never know, so I thought I would ask. Thanks
Technical SEO | KateGMaker