Development Website Duplicate Content Issue
-
Hi,
We launched a client's website (http://rollerbannerscheap.co.uk) around 7th January 2013. We originally constructed the website on a development domain (http://dev.rollerbannerscheap.co.uk), which was active for around 6-8 months before we migrated dev --> live. (The dev site was unblocked from search engines for the first 3-4 months, but then blocked again.)
In late January 2013 we changed the robots.txt file to allow search engines to index the website. A week later I accidentally logged into the DEV website and also changed its robots.txt file to allow search engines to index it.
This obviously caused a duplicate content issue as both sites were identical. I realised what I had done a couple of days later and blocked the dev site from the search engines with the robots.txt file.
Most of the pages from the dev site had been de-indexed from Google apart from 3: the home page (dev.rollerbannerscheap.co.uk) and two blog pages. The live site has 184 pages indexed in Google, so I thought the last 3 dev pages would disappear after a few weeks.
I checked back late February and the 3 dev site pages were still indexed in Google. I decided to 301 redirect the dev site to the live site to tell Google to rank the live site and to ignore the dev site content. I also checked the robots.txt file on the dev site and this was blocking search engines too. But still the dev site is being found in Google wherever the live site should be found.
When I do find the dev site in Google it displays this:
Roller Banners Cheap » admin
dev.rollerbannerscheap.co.uk/
A description for this result is not available because of this site's robots.txt – learn more.
This is really affecting our client's SEO plan and we can't seem to remove the dev site or rank the live site in Google. Please can anyone help?
-
Glad that helped, Lewis.
Unfortunately, there's really no way to determine how long the 301-redirect process will take to get the URLs out of the SERPs. That's entirely up to the search engines and I've never seen much consistency to how long this takes for different cases.
One other thing you could do to try to speed the process is to add an XML sitemap to the dev site, and verify it in both Google and Bing Webmaster Tools. (Only do this AFTER you have added the meta-robots no-index tag to the remaining pages' headers!) This will help remind the crawlers of the dev pages, and hopefully get them to visit sooner, thereby noticing the redirects and individual no-indexes, and taking action on them sooner.
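For reference, a minimal XML sitemap for the remaining dev pages could look like this (a sketch; the /blog/ path is a placeholder for whichever dev URLs are actually still indexed):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>http://dev.rollerbannerscheap.co.uk/</loc></url>
  <url><loc>http://dev.rollerbannerscheap.co.uk/blog/</loc></url>
</urlset>
```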
Personally, I'd let the process run for 2 or 3 weeks after the dev pages get re-indexed without the robots.txt. If the pages are gone, job done. If not, at that point I'd re-evaluate how much damage is being done by still having the dev site in the SERPs. If the damage is heavy, I'd be seriously tempted to use the URL Removal Tool in Bing & Google Webmaster Tools to get them out of the results so I could move on with building the authority of the primary domain (even though that would throw away the value the dev pages have built up).
REMEMBER! Once you've removed the robots.txt block, the meta titles and especially meta descriptions of the DEV site are what will, at least temporarily, be showing in the SERPs once the pages get re-indexed. So make certain they have been fully optimised as if they were the real site. That way, at least in the near term, you'll still be attracting good traffic while waiting for the pages to hopefully drop out. This may allow even the dev pages to do well enough at bringing traffic that you can afford to wait until they drop out naturally.
As far as seeing the additional 70 or so pages that are indexed: as Dan says, at the bottom of the search page is this paragraph and link:
"In order to show you the most relevant results, we have omitted some entries very similar to the 3 already displayed. If you like, you can repeat the search with the omitted results included."
When you click on that link, you'll see the additional pages. This is called the supplemental index and usually means these pages aren't showing up very well in the results anyway. Which means that for most of them, it will be sufficient to make sure you've added the meta-robots no-index tag to their page headers to get them removed from the index and avoid future problems.
Does all that make sense?
Paul
-
Thanks for the confirmation, Dan!
As for the process of verifying the subdomain in order to remove it using Webmaster Tools - I covered that as the last point in option 2.
Paul
-
Hi Lewis
Be sure to register the dev subdomain as a separate website in Webmaster Tools, then do the URL removal from the dev subdomain's site profile. I've seen this method work in as little as a few days.
You can see the other pages in the index by selecting "repeat the search with the omitted results included".
-Dan
-
Wow thanks Paul, great and thorough answer!
The only thing I'll add - in terms of doing a URL removal for the subdomain:
-
You have to first verify the subdomain as a totally separate website in Webmaster Tools. WMT treats different subdomains, and even HTTPS, as different websites, so register that.
-
THEN you can remove the entire subdomain, using the WMT subdomain profile.
-Dan
-
-
Hi Paul,
Firstly, I want to thank you for the great effort you have put into answering my question.
I have changed the robots.txt file by going to Settings > Privacy > allow SERPs.
Do you know how long this may take to remove the dev site from the search engines?
Also, when I search site:dev.rollerbannerscheap.co.uk in Google I only see 3 pages indexed, so I'm unable to see the other 70?
Thanks
-
Requires http:// at the start, I believe.
-
I think the root of your problem comes from a common misconception about the robots.txt file, Lewis.
A robots.txt no-index (disallow) directive is NOT designed to get pages removed from the search index. It simply tells the crawler: "when you encounter this directive, don't crawl any further." So the crawler never even gets a chance to discover whether there are any further pages, never mind whether they might be in the index already.
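(For clarity, the blanket blocking rule in question is the standard robots.txt disallow, which would typically look like this:

```text
User-agent: *
Disallow: /
```

It tells every compliant crawler to stay out of the whole site; it says nothing about removing URLs that are already in the index.)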
THEREFORE! Any pages that are already in the index will simply stay there. (And if any outside sources link to internal pages behind a robots.txt block, those linked pages' URLs will often be added to the search index anyway!) Any pages which are in the index this way will have their meta descriptions blocked from displaying by the robots.txt directive, as you are seeing in your case.
Since a robots.txt no-index directive stops the crawler from looking any deeper, the engines are blocked from actually discovering the 301 redirects on your dev pages, and so aren't getting the cue to drop them in favour of the new pages! Hence the dev site stays in the index and shows up in SERPs. The human user does get the redirect so ends up on the new page, but you still have the duplicate content/competition problem.
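You can see this behaviour with Python's standard-library robots.txt parser (a sketch using the blanket disallow assumed above, not the live dev site's actual file):

```python
from urllib import robotparser

# The blanket block assumed to be on the dev site (not fetched live):
rules = """User-agent: *
Disallow: /"""

rp = robotparser.RobotFileParser()
rp.parse(rules.splitlines())

# Every dev URL is off-limits to a compliant crawler, so the crawler
# never requests those pages and never sees the 301 redirects on them.
print(rp.can_fetch("Googlebot", "http://dev.rollerbannerscheap.co.uk/"))
print(rp.can_fetch("Googlebot", "http://dev.rollerbannerscheap.co.uk/blog/"))
# Both print False
```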
NOTE: to actually tell the search engines not only not to index a page, but to remove it if it is already indexed, you must add a meta-robots no-index tag in the header of each individual page. The robots.txt block MUST be removed in order for this tag to be discovered and obeyed. There is an automatic setting on the WordPress Settings -> Reading page to disallow crawling which adds the meta no-index tag to each page's header.
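The tag itself sits in the head of each page and looks like this:

```html
<head>
  <!-- Tells crawlers to drop this page from the index once recrawled -->
  <meta name="robots" content="noindex">
</head>
```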
Unfortunately, the problem is bigger than you stated, as I'm finding almost 70 pages from the dev site indexed in the .co.uk SERPs.
Here are what I see as your two main options, along with their ramifications:
1. Remove the robots.txt block and allow the 301 redirects to be crawled, eventually causing the dev pages to drop out of the SERPs.
- this would be the preferred option if the existing dev site pages have actually started to acquire incoming links and ranking value, but you'd have no control over how long it would take for the competing dev pages to drop out of the index, meaning they will continue to interfere with your SEO until that process completes
- you'll need to check whether any of the other 70 pages in the results have incoming links and if so 301 redirect them as well
- you'll need to add meta-robots no-index tags to the header of each of the remaining non-redirected pages on the dev site to get them removed from the index.
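On an Apache server, the blanket 301 from dev to live for option 1 could be sketched like this in the dev site's .htaccess (an assumption on my part that your host runs Apache with mod_rewrite; other servers need the equivalent rule):

```apache
RewriteEngine On
# Match any request arriving on the dev subdomain...
RewriteCond %{HTTP_HOST} ^dev\.rollerbannerscheap\.co\.uk$ [NC]
# ...and permanently redirect it to the same path on the live domain
RewriteRule ^(.*)$ http://rollerbannerscheap.co.uk/$1 [R=301,L]
```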
2. Use the URL Removal Tool in Google and Bing Webmaster Tools to have the dev site removed from the index.
- likely the fastest way to get the competing URLs out of the indexes, but would mean that any acquired link authority from the dev pages would be lost, not transferred to the live site.
- would still require either the robots.txt block to stay in place, or better yet, remove it and replace it with meta no-index tags in the header of every page on the dev site.
- you'd need to remove the 301 redirects
- since the search engines consider subdomains completely separate sites, you'd need to set up and verify the dev subdomain as a separate site in both Google and Bing webmaster tools in order for the URL Removal Tool to work.
I've never actually used the URL Removal tool on a full subdomain before, but see no reason why it wouldn't work as expected. You could actually test it out first on your dev.birdybanners.co.uk/ site as it has the same problem of the dev site being indexed in the SERPs.
Hope that helps give you a strategy to resolve the problem. Be sure to holler if you need me to better clarify anything.
Paul
-
Hi Andy,
Thanks for your response.
When I visit Remove URLs, I enter dev.rollerbannerscheap.co.uk, but then it displays the URL as http://www.rollerbannerscheap.co.uk/dev.rollerbannerscheap.co.uk.
I want to remove a subdomain, not a page. Are you able to assist?
-
In GWT, ensure you have removed the directory/subdomain from listings/index (under Optimisation > Remove URLs).
It may take a week to kick in, but if your 301s are working and robots.txt is in place it will work.
In addition to these, ensure you are using canonical tags pointing to the live location, not dev.
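A canonical tag on a dev page pointing at its live equivalent would look like this (the page path here is illustrative, not one of your actual URLs):

```html
<!-- In the <head> of the dev page: point engines at the live version -->
<link rel="canonical" href="http://rollerbannerscheap.co.uk/example-page/">
```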