Quick Fix to "Duplicate page without canonical tag"?
-
In Google Search Console's Index Coverage report, under the Excluded category, there is a sub-category called 'Duplicate page without canonical tag'. The majority of the 665 pages listed there are from a test environment.
If we added a rule to the robots.txt file covering every URL that starts with a particular path ("www.domain.com/host/"), could we eliminate the majority of these errors?
That approach is not among the five or six solutions the Google Search Console Help documentation recommends, but it seems simple and effective. Are we missing something?
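For illustration, the robots.txt rule described would look something like this (the "/host/" path is taken from the question; adjust it to the actual test-environment path):

```
User-agent: *
Disallow: /host/
```

One caveat: robots.txt rules are prefix matches by default, so no trailing wildcard is needed here, and Disallow only blocks crawling, not indexing.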
-
Noindex, robots.txt, and test environments: Indexing Before You Launch
The platform domains are intended for development use and cannot be used for production. A custom or CMS-standard robots.txt will only work on live environments with a custom domain. Adding custom sub-domains (e.g., dev.example.com, test.example.com) for DEV or TEST environments will remove only the X-Robots-Tag: noindex header, but will still serve the platform robots.txt.
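For context, the X-Robots-Tag mentioned above is delivered as an HTTP response header rather than as a meta tag in the page markup; a response carrying it would look roughly like this (the surrounding headers are illustrative):

```http
HTTP/1.1 200 OK
Content-Type: text/html; charset=utf-8
X-Robots-Tag: noindex
```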
To support pre-launch SEO testing, we allow the following bots access to platform domains:
- Site Auditor by Raven
- SEMrush
- RogerBot by Moz
- Dotbot by Moz
If you’re testing links or SEO with other tools, you may request that the tool be added to our robots.txt.
Pantheon's documentation on robots.txt: http://pantheon.io/docs/articles/sites/code/bots-and-indexing/

User-agent: *
Disallow: /

User-agent: RavenCrawler
User-agent: rogerbot
User-agent: dotbot
User-agent: SemrushBot
User-agent: SemrushBot-SA
Allow: /
-
The simplest solution would be to mark every page in your test environment "noindex". This is standard operating procedure anyway, because most people don't want customers stumbling across the wrong URL in search and seeing a buggy page that isn't supposed to be live.
Updating your robots.txt file would tell Google not to crawl the pages, but if Google has already crawled and indexed them, it will simply retain the last crawled version and stop crawling them in the future. You have to direct Google to "noindex" the pages. It will take some time as Google re-crawls each page, but eventually you'll see those errors drop off as Google removes the pages from its index. If I were consulting a client, I would tell them to make the change and check back in two or three months.
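A minimal sketch of that noindex approach, added to the head of every test-environment page (where you add it depends on your templates and CMS):

```html
<meta name="robots" content="noindex">
```

If editing templates is impractical, the same directive can be sent as an X-Robots-Tag: noindex HTTP header instead. Either way, Google must still be able to crawl the page to see the directive, so don't block the test environment in robots.txt at the same time.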
Hope this helps!
-
The new version of Search Console will show all the pages it has discovered on your site, even the noindex pages. Why? I don't know. The truth is that even when you set those pages to noindex and nofollow, it will keep showing you the same error. That does not mean there is something wrong with your site; I would not worry in your case.
Related Questions
-
Does Google read dynamic canonical tags?
Does Google recognize a rel=canonical tag if it is loaded dynamically via JavaScript? Here's what we're using to load it:

<script>
// Inject canonical link into page head
var canonicalLink = window.location.href;
if (window.location.href.indexOf("/subdirname1") != -1) { canonicalLink = window.location.href.replace("/subdirname1", ""); }
if (window.location.href.indexOf("/subdirname2") != -1) { canonicalLink = window.location.href.replace("/subdirname2", ""); }
if (window.location.href.indexOf("/subdirname3") != -1) { canonicalLink = window.location.href.replace("/subdirname3", ""); }
if (window.location.href.indexOf("/subdirname4") != -1) { canonicalLink = window.location.href.replace("/subdirname4", ""); }
if (canonicalLink != window.location.href) {
  var link = document.createElement('link');
  link.rel = 'canonical';
  link.href = canonicalLink;
  document.head.appendChild(link);
}
</script>
Technical SEO | SoulSurfer8
-
Duplicate pages with "/" and without "/"
I seem to have duplicate pages like the examples below: https://example.com https://example.com/ This is happening on 3 pages and I'm not sure why or how to fix it. The first (https://example.com) is what I want and is what I have all my canonicals set to, but that doesn't seem to be doing anything. I've also set up 301 redirects so that each page with a trailing "/" redirects to the page without it. That didn't seem to fix anything, as the (https://example.com/) URL doesn't redirect to (https://example.com) like it's supposed to. This issue has been going on for some time, so any help would be much appreciated. I'm using Squarespace as the design/hosting platform.
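On Squarespace the fix itself has to happen in their URL and redirect settings, but the normalization logic a server-side redirect would apply can be sketched as follows (a hypothetical helper, not Squarespace code):

```javascript
// Strip a single trailing slash from the URL path so that
// https://example.com/about/ and https://example.com/about
// collapse to one canonical form. The bare root path is left
// alone, since "https://example.com/" is the only form of the
// homepage.
function canonicalizeTrailingSlash(url) {
  const u = new URL(url);
  if (u.pathname.length > 1 && u.pathname.endsWith("/")) {
    u.pathname = u.pathname.slice(0, -1);
  }
  return u.toString();
}
```

A server would issue a 301 redirect whenever the requested URL differs from the canonicalized form. Note that for the bare domain, https://example.com and https://example.com/ are by definition the same URL, so that particular pair is not a real duplicate.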
Technical SEO | granitemountain
-
Does adding a noindex tag reduce duplicate content?
I've been working under the assumption for some time that if I have two (or more) pages which are very similar, I can add a noindex tag to the pages I don't need and that will reduce duplicate content. As far as I know, this removes the tagged pages from Google's index and stops any potential issues with duplicate content. It's the second part of that assumption that I'm now questioning. Despite pages having the noindex tag, they continue to appear in Google Search Console as duplicate content, soft 404, etc. That is, new pages are appearing regularly that I know to have the noindex tag. My thoughts on this so far are that Google can still crawl these pages (although it won't index them), so it shows them in GSC due to a crude issue-flagging process. I mainly want to know: a) Is the actual Google algorithm sophisticated enough to ignore these pages even though GSC doesn't? b) How do I explain this to a client?
Technical SEO | ChrisJFoster
-
Duplication, pagination and the canonical
Hi all, and thank you in advance for your assistance. We have an issue of paginated pages being seen as duplicates by the Moz Pro crawler. The paginated pages do contain duplicated content, but are not duplicates of each other. Rather, they pull through a summary of the product descriptions from other landing pages on the site. I was planning to use rel=canonical to deal with them; however, I am concerned, as the paginated pages are not identical to each other but do feature their own sets of duplicate content! We have a similar issue with pages that are not paginated but feature tabs that alter the URL parameters, like so: ?st=BlueWidgets ?st=RedSocks ?st=Offers These are being seen as duplicates of the main URL, and again all feature duplicate content pulled from elsewhere on the site, but are not duplicates of each other. Would a canonical tag be suitable here? Many thanks
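For the tabbed URLs, a canonical tag is generally a good fit, since each ?st= variant is a view of the same base page. A sketch, using a hypothetical base URL:

```html
<!-- Served on /widgets?st=BlueWidgets, /widgets?st=RedSocks, etc. -->
<link rel="canonical" href="https://www.example.com/widgets">
```

The paginated pages are a different case: since page 2, 3, and so on are not duplicates of each other, the common practice is to let each paginated page carry a self-referencing canonical rather than point every page at page 1.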
Technical SEO | .egg
-
Why are my Duplicated Pages not being updated?
I've recently changed a bunch of duplicated pages on our site. The number of duplicate pages did decrease slightly; however, some of the pages that I've already fixed still show as unfixed according to Moz. Whenever I check the back end of each of these pages, I see that they've already been changed, and none of them are the same where the meta title tag is concerned. Can anyone provide any suggestions on what I should do to get a more accurate result? Is there a process that I'm missing?
Technical SEO | ckroaster
-
Wordpress tags and duplicate content?
I've seen a few other Q&A posts on this, but I haven't found a complete answer. I read somewhere a while ago that you can use as many tags as you would like, and I found that I rank for each tag I used. For example, I could rank for best night clubs in san antonio, good best night clubs in san antonio, great best night clubs in san antonio, top best night clubs in san antonio, etc. However, I now see that I'm creating a ton of duplicate content. Is there any way to set a canonical tag on the tag pages to link back to the original post so that I still keep my rankings? Would future tags be ignored if I did this?
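A canonical tag normally points between near-identical pages, so pointing every tag archive at one post only makes sense if the archive contains little besides that post. A more common approach, sketched below, is to keep tag archives out of the index entirely by emitting a robots meta tag on them (most WordPress SEO plugins expose this as a setting, so hand-editing the template is rarely necessary):

```html
<!-- Output only on tag archive pages -->
<meta name="robots" content="noindex, follow">
```

Doing this would remove the tag pages, and their rankings, from search over time; if those rankings matter, leaving the tags indexed and accepting some overlap may be the better trade-off.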
Technical SEO | howlusa
-
Job/Blog Pages and rel=canonical
Hi, I know there are several questions and articles concerning rel=canonical on SEOmoz, but I didn't find the answer I was looking for... We have some job pages; the URLs are /jobs and then /jobs/2, /jobs/3, etc. Our blog pages follow the same pattern: /blog, /blog/2, /blog/3... Our CMS is self-produced, and every job/blog page has the same title tag. According to SEOmoz (and Webmaster Tools), we have a lot of duplicate title tags because of this. If we put rel=canonical in each page's source code, the title-tag problem will be solved for Google, right? Because they will just display the /jobs and /blog main pages. That would be great, because we don't want 40 blog pages in the index. My concern (a stupid question, but I am not sure): if we put rel=canonical on the pages, does Google still crawl them and index our job links? We want to keep our rankings for our job offers on pages 2-xxx. More simply: will we find our job offers on /jobs/2, /jobs/3... in Google if these pages have rel=canonical on them? AND ONE MORE: does the SEOmoz bot also follow the rel=canonical and then reduce the number of duplicate title tags in the campaigns??? Thanx........
Technical SEO | accessKellyOCG
-
"Too Many On-Page Links" Issue
I'm being docked for too many on-page links on every page of the site, and I believe it is because the drop-down nav has about 130 links in it. That's because we have a few levels of drop-downs, so you can get to any page from the main page. The site is here: http://www.ibethel.org/ Is what I'm doing just bad practice, and should the drop-downs give less information? Or is there something different I should do with the links? Maybe a nofollow on the last tier of the drop-down?
Technical SEO | BethelMedia