Development Website Duplicate Content Issue
-
Hi,
We launched a client's website (http://rollerbannerscheap.co.uk) around 7th January 2013. We originally built the site on a development domain (http://dev.rollerbannerscheap.co.uk), which was active for around 6-8 months before we migrated dev --> live (the dev site was open to search engines for the first 3-4 months, then blocked again).
In late Jan 2013 I changed the robots.txt file to allow search engines to index the website. A week later I accidentally logged into the DEV website and changed its robots.txt file to allow search engines to index it too.
This obviously caused a duplicate content issue as both sites were identical. I realised what I had done a couple of days later and blocked the dev site from the search engines with the robots.txt file.
Most of the pages from the dev site had been de-indexed from Google apart from 3: the home page (dev.rollerbannerscheap.co.uk) and two blog pages. The live site has 184 pages indexed in Google, so I thought the last 3 dev pages would disappear after a few weeks.
I checked back in late February and the 3 dev site pages were still indexed in Google. I decided to 301 redirect the dev site to the live site to tell Google to rank the live site and ignore the dev site content. I also checked the robots.txt file on the dev site, and it was blocking search engines too. But the dev site is still being found in Google where the live site should be.
When I do find the dev site in Google it displays this:
Roller Banners Cheap » admin
dev.rollerbannerscheap.co.uk/
A description for this result is not available because of this site's robots.txt – learn more.

This is really affecting our client's SEO plan and we can't seem to remove the dev site or rank the live site in Google. Please can anyone help?
-
Glad that helped, Lewis.
Unfortunately, there's really no way to determine how long the 301-redirect process will take to get the URLs out of the SERPs. That's entirely up to the search engines and I've never seen much consistency to how long this takes for different cases.
One other thing you could do to try to speed the process is to add an XML sitemap to the dev site and verify it in both Google and Bing Webmaster Tools. (Only do this AFTER you have added the meta-robots no-index tag to the remaining pages' headers!) This will remind the crawlers of the dev pages and hopefully get them to visit sooner, thereby noticing the redirects and individual no-indexes, and taking action on them sooner.
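In case it's useful, a minimal XML sitemap for the dev subdomain would look something like this (the URLs here are illustrative placeholders, not the site's real remaining pages):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- one <url> entry per dev page still showing in the index -->
  <url>
    <loc>http://dev.rollerbannerscheap.co.uk/</loc>
  </url>
  <url>
    <loc>http://dev.rollerbannerscheap.co.uk/blog/</loc>
  </url>
</urlset>
```

Submitting it in each Webmaster Tools profile gives the crawlers an explicit list of the stale URLs to revisit.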
Personally, I'd let the process run for 2 or 3 weeks after the dev pages get re-indexed without the robots.txt. If the pages are gone, job done. If not, at that point I'd re-evaluate how much damage is being done by still having the dev site in the SERPs. If the damage is heavy, I'd be seriously tempted to use the URL Removal Tool in Bing & Google Webmaster Tools to get them out of the results so I could move on with building the authority of the primary domain (even though that would throw away the value the dev pages have built up).
REMEMBER! Once you've removed the robots.txt block, the meta titles and especially meta descriptions of the DEV site are what will, at least temporarily, be showing in the SERPs once the pages get re-indexed. So make certain they have been fully optimised as if they were the real site. That way, at least in the near term, you'll still be attracting good traffic while waiting for the pages to hopefully drop out. The dev pages may even do well enough at bringing traffic that you can afford to wait until they drop out naturally.
As far as seeing the additional 70 or so pages that are indexed: as Dan says, at the bottom of the search page is this paragraph and link:

_In order to show you the most relevant results, we have omitted some entries very similar to the 3 already displayed. If you like, you can repeat the search with the omitted results included._

When you click on that link, you'll see the additional pages. This is called the supplemental index and usually means these pages aren't showing up very well in the results anyway. Which means that for most of them, it will be sufficient to add the meta-robots no-index tag to their page headers to get them removed from the index and avoid future problems.
Does all that make sense?
Paul
-
Thanks for the confirmation, Dan!
As for the process of verifying the subdomain in order to remove it using Webmaster Tools - I covered that as the last point in option 2.
Paul
-
Hi Lewis
Be sure to register the dev subdomain as a separate website in Webmaster Tools, then do the URL removal from the dev subdomain's site profile. I've seen this method work in as little as a few days.
You can see the other pages in the index by selecting "repeat the search with the omitted results included".
-Dan
-
Wow thanks Paul, great and thorough answer!
The only thing I'll add, in terms of doing a URL removal for the subdomain:

1. You have to first verify the subdomain as a totally separate website in Webmaster Tools. WMT treats different subdomains (and even httpS) as different websites, so register it.

2. THEN you can remove the entire subdomain, using the WMT subdomain profile.
-Dan
-
Hi Paul,
Firstly I want to thank you for the great effort you have put into answering my question.
I have changed the robots.txt file by going to Settings > Privacy > allow SERPs.
Do you know how long this may take to remove the dev site from the search engines?
Also, when I search site:dev.rollerbannerscheap.co.uk in Google I only see 3 pages indexed, so I'm unable to see the other 70?
Thanks
-
It requires the http:// prefix, I believe.
-
I think the root of your problem comes from a common misconception about the robots.txt file, Lewis.
A robots.txt Disallow directive is NOT designed to get pages removed from the search index. It simply tells the crawler: "when you encounter this directive, don't crawl any further." So the crawler never even gets a chance to discover whether there are any further pages, never mind whether they might be in the index already.
THEREFORE! Any pages that are already in the index will simply stay there. (And if any outside sources link to internal pages behind a robots.txt Disallow, those linked pages' URLs will often be added to the search index anyway!) Any pages indexed this way will have their meta descriptions blocked from displaying by the robots.txt directive, as you are seeing in your case.
Since a robots.txt Disallow stops the crawler from looking any deeper, the engines are blocked from actually discovering the 301 redirects on your dev pages, and so aren't getting the cue to drop them in favour of the new pages! Hence the dev site stays in the index and shows up in the SERPs. A human visitor does get the redirect and ends up on the new page, but you still have the duplicate content/competition problem.
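You can see this crawl-blocking behaviour with Python's built-in robots.txt parser (the domain and paths below are illustrative, not your actual files):

```python
from urllib import robotparser

# Simulate the kind of blanket block sitting on the dev subdomain
rp = robotparser.RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /",
])

# The crawler is refused every URL, so it can never fetch a page
# to discover the 301 redirect (or a meta no-index tag) on it
print(rp.can_fetch("Googlebot", "http://dev.example.com/"))       # False
print(rp.can_fetch("Googlebot", "http://dev.example.com/blog/"))  # False
```

Since the fetch itself is refused, nothing *on* the page (redirects, meta tags) can ever be acted on.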
NOTE: to actually tell the search engines not only not to index a page, but to remove it if it is already indexed, you must add a meta-robots no-index tag to the header of each individual page. The robots.txt Disallow MUST NOT be in place, or this tag will never be discovered and obeyed. There is a setting on the WordPress Settings -> Reading page to discourage search engines, which automatically adds the meta no-index tag to each page's header.
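For pages edited by hand, the tag itself is a one-liner in each page's head (a generic sketch):

```html
<head>
  <!-- tells search engines to drop this page from the index;
       only honoured if robots.txt allows the page to be crawled -->
  <meta name="robots" content="noindex">
</head>
```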
Unfortunately, the problem is bigger than you stated, as I'm finding almost 70 pages from the dev site indexed in the .co.uk SERPs.
Here are what I see as your two main options, along with their ramifications:
1. Remove the robots.txt Disallow and allow the 301 redirects to be crawled, eventually causing the dev pages to drop out of the SERPs.
- this would be the preferred option if the existing dev site pages have actually started to acquire incoming links and ranking value, but you'd have no control over how long it would take for the competing dev pages to drop out of the index, meaning they will continue to interfere with your SEO until that process completes
- you'll need to check whether any of the other 70 pages in the results have incoming links and if so 301 redirect them as well
- you'll need to add meta-robots no-index tags to the header of each of the remaining non-redirected pages on the dev site to get them removed from the index.
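For reference, the site-wide 301 from dev to live can be done in the dev site's .htaccess on Apache. This is a sketch assuming mod_rewrite is available, not your actual config:

```apache
RewriteEngine On
# Match requests arriving on the dev subdomain...
RewriteCond %{HTTP_HOST} ^dev\.rollerbannerscheap\.co\.uk$ [NC]
# ...and permanently redirect them to the same path on the live site
RewriteRule ^(.*)$ http://rollerbannerscheap.co.uk/$1 [R=301,L]
```

Redirecting path-for-path (rather than sending everything to the home page) is what lets each dev page pass its value to its live equivalent.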
2. Use the URL Removal Tool in Google and Bing Webmaster Tools to have the dev site removed from the index.
- likely the fastest way to get the competing URLs out of the indexes, but would mean that any acquired link authority from the dev pages would be lost, not transferred to the live site.
- would still require either the robots.txt Disallow to stay in place or, better yet, be removed and replaced with meta no-index tags in the header of every page on the dev site.
- you'd need to remove the 301 redirects
- since the search engines consider subdomains completely separate sites, you'd need to set up and verify the dev subdomain as a separate site in both Google and Bing webmaster tools in order for the URL Removal Tool to work.
I've never actually used the URL Removal tool on a full subdomain before, but see no reason why it wouldn't work as expected. You could actually test it out first on your dev.birdybanners.co.uk/ site as it has the same problem of the dev site being indexed in the SERPs.
Hope that helps give you a strategy to resolve the problem? Be sure to holler if you need me to better clarify anything.
Paul
-
Hi Andy,
Thanks for your response.
When I visit remove URLs, I enter dev.rollerbannerscheap.co.uk but then it displays the URL as http://www.rollerbannerscheap.co.uk/dev.rollerbannerscheap.co.uk.
I want to remove a subdomain, not a page - are you able to assist?
-
In GWT, ensure you have removed the directory/subdomain from the listings/index (under Optimisation > Remove URLs).
It may take a week to kick in, but if your 301s are working and robots.txt is in place, it will work.
In addition, ensure you are using canonical tags pointing to the live location, not dev.
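The canonical tag is one line in each page's head, pointing at that page's live-site URL (home page shown here as the example):

```html
<link rel="canonical" href="http://rollerbannerscheap.co.uk/" />
```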