Development Website Duplicate Content Issue
-
Hi,
We launched a client's website around 7th January 2013 (http://rollerbannerscheap.co.uk). We originally built the website on a development domain (http://dev.rollerbannerscheap.co.uk), which was active for around 6-8 months before we migrated dev --> live (the dev site was unblocked from search engines for the first 3-4 months, but then blocked again).
In late January 2013 I changed the robots.txt file to allow search engines to index the website. A week later I accidentally logged into the DEV website and changed its robots.txt file to allow search engines to index it too.
This obviously caused a duplicate content issue as both sites were identical. I realised what I had done a couple of days later and blocked the dev site from the search engines with the robots.txt file.
Most of the pages from the dev site had been de-indexed from Google apart from 3: the home page (dev.rollerbannerscheap.co.uk) and two blog pages. The live site has 184 pages indexed in Google. So I thought the last 3 dev pages would disappear after a few weeks.
I checked back in late February and the 3 dev site pages were still indexed in Google. I decided to 301 redirect the dev site to the live site to tell Google to rank the live site and ignore the dev site content. I also checked the robots.txt file on the dev site, and this was blocking search engines too. But the dev site is still being found in Google where the live site should be.
When I do find the dev site in Google, it displays this:

Roller Banners Cheap » admin
dev.rollerbannerscheap.co.uk/
A description for this result is not available because of this site's robots.txt – learn more.

This is really affecting our client's SEO plan, and we can't seem to remove the dev site or rank the live site in Google.

Please can anyone help?
-
Glad that helped, Lewis.
Unfortunately, there's really no way to determine how long the 301-redirect process will take to get the URLs out of the SERPs. That's entirely up to the search engines and I've never seen much consistency to how long this takes for different cases.
One other thing you could do to help speed the process is to add an XML sitemap to the dev site and verify it in both Webmaster Tools. (Only do this AFTER you have added the meta-robots no-index tag to the remaining pages' headers!) This will help remind the crawlers of the dev pages and hopefully get them to visit sooner, thereby noticing the redirects and individual no-index tags and taking action on them sooner.
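As a sketch, a minimal sitemap for the dev subdomain would just list the dev URLs you want revisited (the blog paths below are placeholders — use the real indexed URLs):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- Home page plus the remaining indexed pages.
       The two blog paths are placeholders, not the actual URLs. -->
  <url><loc>http://dev.rollerbannerscheap.co.uk/</loc></url>
  <url><loc>http://dev.rollerbannerscheap.co.uk/blog/example-post-1/</loc></url>
  <url><loc>http://dev.rollerbannerscheap.co.uk/blog/example-post-2/</loc></url>
</urlset>
```

Upload it to the dev site's root and submit it in the dev subdomain's Webmaster Tools profile.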
Personally, I'd let the process run for 2 or 3 weeks after the dev pages get re-indexed without the robots.txt. If the pages are gone, job done. If not, at that point I'd re-evaluate how much damage is being done by still having the dev site in the SERPs. If the damage is heavy, I'd be seriously tempted to use the URL Removal Tool in Bing & Google Webmaster Tools to get them out of the results so I could move on with building the authority of the primary domain (even though that would throw away the value the dev pages have built up).
REMEMBER! Once you've removed the robots.txt no-index, the meta titles and especially meta descriptions of the DEV site are what will, at least temporarily, be showing in the SERPs once the pages get re-indexed. So make certain they have been fully optimised as if they were the real site. That way, at least in the near term, you'll still be attracting good traffic while waiting for the pages to hopefully drop out. This may allow even the dev pages to do well enough at bringing in traffic that you can afford to wait until they drop out naturally.
As far as seeing the additional 70 or so pages that are indexed: as Dan says, at the bottom of the search page is this paragraph and link:

_In order to show you the most relevant results, we have omitted some entries very similar to the 3 already displayed. If you like, you can repeat the search with the omitted results included._

When you click on that link, you'll see the additional pages. This is called the supplemental index and usually means these pages aren't showing up very well in the results anyway. That means for most of them, it will be sufficient to add the meta-robots no-index tag to their page headers to get them removed from the index and avoid future problems.
Does all that make sense?
Paul
-
Thanks for the confirmation, Dan!
As for the process of verifying the subdomain in order to remove it using Webmaster Tools - I covered that as the last point in option 2.
Paul
-
Hi Lewis
Be sure to register the dev subdomain as a separate website in Webmaster Tools, then do the URL removal from the dev subdomain's site profile. I've seen this method work in as little as a few days.
You can see the other pages in the index by selecting "repeat the search with the omitted results included".
-Dan
-
Wow thanks Paul, great and thorough answer!
The only thing I'll add, in terms of doing a URL removal for the subdomain:

1. You have to first verify the subdomain as a totally separate website in Webmaster Tools. WMT treats every subdomain, and even httpS, as a different website, so register that.

2. THEN you can remove the entire subdomain, using the WMT subdomain profile.
-Dan
-
Hi Paul,
Firstly, I want to thank you for the great effort you have put into answering my question.
I have changed the robots.txt file by going to Settings > Privacy > allow SERPs.
Do you know how long this may take to remove the dev site from the search engines?
Also, when I search site:dev.rollerbannerscheap.co.uk in Google I only see 3 pages indexed, so I'm unable to see the other 70?
Thanks
-
It requires the http:// prefix, I believe.
-
I think the root of your problem comes from a common misconception about the robots.txt file, Lewis.
A robots.txt no-index directive is NOT designed to get pages removed from the search index. It simply tells the crawler: "when you encounter this directive, don't crawl any further." So the crawler never even gets a chance to discover whether there are any further pages, never mind whether they might be in the index already.
THEREFORE! Any pages that are already in the index will simply stay there. (And if any outside sources have links to internal pages behind a robots.txt no-index directive, those linked pages' URLs will often be added to the search index anyway!) Any pages which are in the index this way will have their meta descriptions blocked from displaying by the robots.txt directive, as you are seeing in your case.
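For reference, the blocking directive on the dev site is presumably the standard catch-all disallow rule - a sketch, since I haven't seen your exact file:

```text
# robots.txt on dev.rollerbannerscheap.co.uk
# Stops crawlers from fetching any page, but does NOT
# remove URLs that are already in the index.
User-agent: *
Disallow: /
```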
Since a robots.txt no-index directive stops the crawler from looking any deeper, the engines are blocked from actually discovering the 301 redirects on your dev pages, and so aren't getting the cue to drop them in favour of the new pages! Hence the dev site stays in the index and shows up in SERPs. The human user does get the redirect so ends up on the new page, but you still have the duplicate content/competition problem.
NOTE: to actually tell the search engines not only not to index a page, but to remove it if it is already indexed, you must add a meta-no-index tag in the header of the individual pages. The robots.txt no-index MUST NOT be in place in order for this tag to be discovered and obeyed. There is an automatic setting on the WordPress Settings -> Reading page to disallow crawling which automatically adds the meta-no-index tag to each page's header.
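As a sketch, the tag each dev page would need in its header looks like this:

```html
<!-- Tells engines to drop this page from the index (while still following its links).
     It is only discovered and obeyed if robots.txt is NOT blocking the crawler. -->
<meta name="robots" content="noindex, follow">
```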
Unfortunately, the problem is bigger than you stated, as I'm finding almost 70 pages from the dev site indexed in the .co.uk SERPs.
Here are what I see as your two main options, along with their ramifications:
1. Remove the robots.txt no-index directive and allow the 301 redirects to be crawled, eventually causing the dev pages to drop out of the SERPs
- this would be the preferred option if the existing dev site pages have actually started to acquire incoming links and ranking value, but you'd have no control over how long it would take for the competing dev pages to drop out of the index, meaning they will continue to interfere with your SEO until that process completes
- you'll need to check whether any of the other 70 pages in the results have incoming links and if so 301 redirect them as well
- you'll need to add meta-robots no-index tags to the header of each of the remaining non-redirected pages on the dev site to get them removed from the index.
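For what it's worth, if the dev site runs on Apache, the site-wide 301 can be a single wildcard rule - a sketch, assuming mod_rewrite is available, so adjust for your actual server setup:

```apacheconf
# .htaccess on dev.rollerbannerscheap.co.uk
# 301-redirect every dev URL to the same path on the live domain.
RewriteEngine On
RewriteCond %{HTTP_HOST} ^dev\.rollerbannerscheap\.co\.uk$ [NC]
RewriteRule ^(.*)$ http://rollerbannerscheap.co.uk/$1 [R=301,L]
```

A path-preserving rule like this (rather than redirecting everything to the home page) is what passes each dev page's value to its matching live page.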
2. Use the URL Removal Tool in Google and Bing Webmaster Tools to have the dev site removed from the index
- likely the fastest way to get the competing URLs out of the indexes, but would mean that any acquired link authority from the dev pages would be lost, not transferred to the live site.
- would still require either the robots.txt no-index directive to stay in place, or better yet, remove it and replace it with meta-no-index tags in the header of every page on the dev site.
- you'd need to remove the 301 redirects
- since the search engines consider subdomains completely separate sites, you'd need to set up and verify the dev subdomain as a separate site in both Google and Bing webmaster tools in order for the URL Removal Tool to work.
I've never actually used the URL Removal tool on a full subdomain before, but see no reason why it wouldn't work as expected. You could actually test it out first on your dev.birdybanners.co.uk/ site as it has the same problem of the dev site being indexed in the SERPs.
Hope that helps give you a strategy to resolve the problem? Be sure to holler if you need me to better clarify anything.
Paul
-
Hi Andy,
Thanks for your response.
When I visit remove URLs, I enter dev.rollerbannerscheap.co.uk but then it displays the URL as http://www.rollerbannerscheap.co.uk/dev.rollerbannerscheap.co.uk.
I want to remove a sub domain not a page, are you able to assist?
-
In GWT, ensure you have removed the directory/subdomain from listings/index (under Optimisation > Remove URLs).
It may take a week to kick in, but if your 301s are working and robots.txt is in place, it will work.
In addition, ensure you are using canonical tags pointing to the live location, not dev.
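For example, each live page's header would carry something like this (the path shown is just a placeholder):

```html
<!-- On the live page, declaring the live URL as the canonical version -->
<link rel="canonical" href="http://www.rollerbannerscheap.co.uk/example-page/">
```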