Duplicate pages in Google index despite canonical tag and URL Parameter in GWMT
-
Good morning Moz...
This is a weird one. It seems to be a "bug" with Google, honest...
We migrated our site www.three-clearance.co.uk to a Drupal platform over the new year. The old site used URL-based tracking for heat map purposes, so for instance
www.three-clearance.co.uk/apple-phones.html
..could be reached via
www.three-clearance.co.uk/apple-phones.html?ref=menu or
www.three-clearance.co.uk/apple-phones.html?ref=sidebar and so on.
GWMT was told about the ref parameter, and the canonical meta tag was used to indicate our preferred URL. As expected we encountered no duplicate content issues and everything was good.
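(For context, the canonical markup in question is the standard one; the page URL here is just the example from above:)

```html
<!-- In the <head> of every ?ref= variant, pointing at the clean URL -->
<link rel="canonical" href="http://www.three-clearance.co.uk/apple-phones.html" />
```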
This is the chain of events:
-
Site migrated to new platform following best practice, as far as I can attest to.
-
The only known issue was that the verification for both Google Analytics (meta tag) and GWMT (HTML file) didn't transfer as expected, so between the relaunch on 22nd Dec and the fix on 2nd Jan we have no GA data, and presumably there was a period where GWMT became unverified.
-
URL structure and URIs were maintained 100% (which may be a problem, now)
-
Yesterday I discovered 200-ish 'duplicate meta titles' and 'duplicate meta descriptions' in GWMT. Uh oh, thought I. Expand the report out and the duplicates are in fact ?ref= versions of the same root URL. Double uh oh, thought I.
-
Run, not walk, to Google and do some search-fu:
http://is.gd/yJ3U24 (9 versions of the same page, in the index, the only variation being the ?ref= URI)
Checked Bing, and it has indexed each root URL once, as it should.
Situation now:
-
The site no longer uses the ?ref= parameter, although of course there still exist some external backlinks that use it. This was intentional and happened when we migrated.
-
I 'reset' the URL parameter in GWMT yesterday, given that there's no "delete" option. The "URLs monitored" count went from 900 to 0, but today is at over 1,000 (another wtf moment)
I also resubmitted the XML sitemap and fetched 5 'hub' pages as Google, including the homepage and HTML site-map page.
- The ?ref= URLs in the index have the disadvantage of actually working: we preserved the URL structure, and of course the webserver just ignores the nonsense arguments and serves the page. So I assume Google assumes the pages still exist, and won't drop them from the index but will instead apply a duplicate-content penalty. Or maybe call us a spam farm. Who knows.
Options that occurred to me (other than maybe making our canonical tags bold, or locating a Google bug submission form) include:
A) Blocking *?ref=* in robots.txt, but to me this says "you can't see these pages", not "these pages don't exist", so it isn't correct
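(For what it's worth, option A would look like the sketch below. Note that Disallow only blocks crawling; it doesn't remove URLs that are already indexed, which is exactly why it feels wrong here:)

```
User-agent: *
Disallow: /*?ref=
```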
B) Hand-removing the URLs from the index through a page removal request per indexed URL
C) Apply a 301 to each indexed URL (hello Bing dirty-sitemap penalty)
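(If option C were taken, a server-side rewrite would be more practical than per-URL redirects. A minimal Apache sketch, assuming mod_rewrite is available; adjust for whatever the Drupal stack actually runs on:)

```apache
RewriteEngine On
# If the query string starts with the legacy ref= parameter...
RewriteCond %{QUERY_STRING} ^ref= [NC]
# ...301 to the same path with the query string stripped
# (the trailing "?" in the target drops the query string)
RewriteRule ^(.*)$ /$1? [R=301,L]
```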
D) Post on SEOMoz because I genuinely can't understand this.
Even if the gap in verification caused GWMT to forget that we had set ?ref= as a URL parameter, the parameter was no longer in use: the verification only went missing when we relaunched the site without this tracking. Google seems to be completely ignoring our canonical tags as well as the GWMT URL setting. I have no idea why, and can't think of the best way to correct the situation.
Do you?
Edited To Add: As of this morning the "edit/reset" buttons have disappeared from the GWMT URL Parameters page, along with the option to add a new one. There are no messages explaining why, and of course the Google help page doesn't mention disappearing buttons (it doesn't even explain what 'reset' does, or why there's no 'remove' option).
-
-
GWT numbers sometimes ignore parameter handling, oddly, and can be hard to read. I'm only seeing about 40 indexed pages with "ref" in the URL, which hardly seems disastrous.
One note - once the pages get indexed, for whatever reason, de-indexing can take weeks, even if you do everything correctly. Don't change tactics every couple of days, or you're only going to make this worse long-term.
I think canonicals are fine for this, and they should be effective. It just may take Google some time to re-crawl and dislodge the pages. You actually may want to create an XML sitemap (for Google only) that just contains the "ref=" pages Google has indexed. This can nudge them to re-crawl and honor the canonical. Otherwise, the pages could sit there forever.
You could 301-redirect - it would be perfectly valid in this case, since those URLs have no value to visitors. I wouldn't worry about the Bing sitemaps - just don't include the "ref=" URLs in the Bing maps, and you'll be fine.
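The suggested Google-only sitemap of indexed ?ref= URLs is easy to generate. A minimal Python sketch; the URL list here is hypothetical, so substitute the actual indexed variants from the site: query or the GWMT duplicate-title report:

```python
# Sketch: build a Google-only XML sitemap from the indexed ?ref= URLs.
from xml.sax.saxutils import escape

def build_sitemap(urls):
    """Return sitemap XML listing the given URLs."""
    entries = "\n".join(
        f"  <url><loc>{escape(u)}</loc></url>" for u in urls
    )
    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
        f"{entries}\n"
        "</urlset>"
    )

# Hypothetical examples following the pattern described in the question.
indexed = [
    "http://www.three-clearance.co.uk/apple-phones.html?ref=menu",
    "http://www.three-clearance.co.uk/apple-phones.html?ref=sidebar",
]
print(build_sitemap(indexed))
```

Submit the resulting file in GWMT only (not Bing), so the crawl nudge doesn't create the dirty-sitemap problem mentioned above.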
-
Monday morning, still the same, still no reset/add parameter buttons in GWMT any more, still not understanding why Google is being so stubborn about this.
3 identical pages in the index, with Google ignoring both the GWMT URL parameter and the canonical meta tag.
Sigh.
-
Nope, it's a nice clean sitemap that GWMT says provides the right number of URLs, with no 404s and no ?ref= links.
It's like Google has always indexed these links separately, but for some reason has decided to only show them now that they no longer exist...
-
They aren't in your XML sitemap, are they? You probably generated a new one when you moved the site over... that could possibly be overriding the parameters... maybe... weird...