Development Website Duplicate Content Issue
-
Hi,
We launched a client's website around 7th January 2013 (http://rollerbannerscheap.co.uk). We originally built the site on a development domain (http://dev.rollerbannerscheap.co.uk), which was active for around 6-8 months before we migrated dev --> live (the dev site was open to search engines for the first 3-4 months, then blocked again).
In late January 2013 I changed the robots.txt file to allow search engines to index the website. A week later I accidentally logged into the DEV website and changed its robots.txt file to allow the search engines to index it too.
This obviously caused a duplicate content issue as both sites were identical. I realised what I had done a couple of days later and blocked the dev site from the search engines with the robots.txt file.
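For reference, the blanket block being described is the simplest possible robots.txt - a sketch of what would sit at the root of the dev subdomain:

```text
User-agent: *
Disallow: /
```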
Most of the pages from the dev site had been de-indexed from Google apart from three: the home page (dev.rollerbannerscheap.co.uk) and two blog pages. The live site has 184 pages indexed in Google, so I thought the last 3 dev pages would disappear after a few weeks.
I checked back late February and the 3 dev site pages were still indexed in Google. I decided to 301 redirect the dev site to the live site to tell Google to rank the live site and to ignore the dev site content. I also checked the robots.txt file on the dev site and this was blocking search engines too. But still the dev site is being found in Google wherever the live site should be found.
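For anyone following along, a site-wide 301 from the dev subdomain to the live domain is typically done in the dev site's .htaccess on Apache - a sketch, assuming mod_rewrite is enabled (domains taken from the thread):

```apache
RewriteEngine On
# Catch any request arriving on the dev hostname...
RewriteCond %{HTTP_HOST} ^dev\.rollerbannerscheap\.co\.uk$ [NC]
# ...and permanently redirect it to the same path on the live site.
RewriteRule ^(.*)$ http://rollerbannerscheap.co.uk/$1 [R=301,L]
```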
When I do find the dev site in Google, it displays this:

Roller Banners Cheap » admin
dev.rollerbannerscheap.co.uk/
A description for this result is not available because of this site's robots.txt – learn more.

This is really affecting our client's SEO plan and we can't seem to remove the dev site or rank the live site in Google. Please can anyone help?
-
Glad that helped, Lewis.
Unfortunately, there's really no way to determine how long the 301-redirect process will take to get the URLs out of the SERPs. That's entirely up to the search engines and I've never seen much consistency to how long this takes for different cases.
One other thing you could do to try to speed the process up is to add an XML sitemap to the dev site and verify it in both Webmaster Tools. (Only do this AFTER you have added the meta-robots no-index tag to the remaining pages' headers!) This will remind the crawlers of the dev pages and hopefully get them to visit sooner, thereby noticing the redirects and individual no-indexes, and taking action on them sooner.
Personally, I'd let the process run for 2 or 3 weeks after the dev pages get re-indexed without the robots.txt. If the pages are gone, job done. If not, at that point I'd re-evaluate how much damage is being done by still having the dev site in the SERPs. If the damage is heavy, I'd be seriously tempted to use the URL Removal Tool in Bing & Google Webmaster Tools to get them out of the results so I could move on with building the authority of the primary domain (even though that would throw away the value the dev pages have built up).
REMEMBER! Once you've removed the robots.txt block, the metatitles and especially metadescriptions of the DEV site are what will, at least temporarily, be showing in the SERPs once the pages get re-indexed. So make certain they have been fully optimised as if they were the real site. That way, at least in the near term, you'll still be attracting good traffic while waiting for the pages to hopefully drop out. This may allow even the dev pages to do well enough at bringing traffic that you can afford to wait until they drop out naturally.
As far as seeing the additional 70 or so pages that are indexed: as Dan says, at the bottom of the search page is this paragraph and link:

"In order to show you the most relevant results, we have omitted some entries very similar to the 3 already displayed. If you like, you can repeat the search with the omitted results included."

When you click on that link, you'll see the additional pages. This is called the supplemental index and usually means these pages aren't showing up very well in the results anyway. Which means that for most of them, it will be sufficient to add the meta-robots no-index tag to their page headers to get them removed from the index and avoid future problems.
Does all that make sense?
Paul
-
Thanks for the confirmation, Dan!
As for the process of verifying the subdomain in order to remove it using Webmaster Tools - I covered that as the last point in option 2.
Paul
-
Hi Lewis
Be sure to register the dev subdomain as a separate website with Webmaster Tools, then do the URL removal from the dev subdomain's site profile. I've seen this method work in as little as a few days.
You can see the other pages in the index by selecting "repeat the search with the omitted results included".
-Dan
-
Wow thanks Paul, great and thorough answer!
The only thing I'll add - in terms of doing a URL removal for the subdomain:

1. You have to first verify the subdomain as a totally separate website in Webmaster Tools. WMT treats different subdomains, and even httpS, as different websites, so register that.

2. THEN you can remove the entire subdomain, using the WMT subdomain profile.

-Dan
-
Hi Paul,
Firstly I want to thank you for the great effort you have put into answering my question.
I have changed the robots.txt file by going to Settings > Privacy > allow SERPs.
Do you know how long this may take to remove the dev site from the search engines?
Also, when I search site:dev.rollerbannerscheap.co.uk in Google I only see 3 pages indexed, so I'm unable to see the other 70?
Thanks
-
It requires the full URL with http://, I believe.
-
I think the root of your problem comes from a common misconception about the robots.txt file, Lewis.
A robots.txt blocking directive is NOT designed to get pages removed from the search index. It simply tells the crawler: "when you encounter this directive, don't crawl any further". So the crawler never even gets a chance to discover whether there are any further pages, never mind whether they might be in the index already.
THEREFORE! Any pages that are already in the index will simply stay there. (And if any outside sources have links to internal pages behind a robots.txt blocking directive, those linked pages' URLs will often be added to the search index anyway!) Any pages which are in the index this way will have their meta-descriptions blocked from displaying by the robots.txt directive, as you are seeing in your case.
Since a robots.txt no-index directive stops the crawler from looking any deeper, the engines are blocked from actually discovering the 301 redirects on your dev pages, and so aren't getting the cue to drop them in favour of the new pages! Hence the dev site stays in the index and shows up in SERPs. The human user does get the redirect so ends up on the new page, but you still have the duplicate content/competition problem.
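Paul's point can be demonstrated with Python's standard-library robots.txt parser - a sketch using an illustrative domain, not the client's:

```python
from urllib import robotparser

# A blanket-disallow robots.txt like the one on the dev site.
rules = [
    "User-agent: *",
    "Disallow: /",
]

rp = robotparser.RobotFileParser()
rp.parse(rules)

# The crawler is refused every URL, so it can never reach the pages
# to discover their 301 redirects or meta no-index tags.
for url in ("http://dev.example.com/", "http://dev.example.com/blog/post/"):
    print(url, rp.can_fetch("Googlebot", url))  # both print False
```

A human visitor never consults robots.txt, which is why people following links still get redirected while the stale URLs sit in the index.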
NOTE: to actually tell the search engines not only not to index a page, but to remove it if it is already indexed, you must add a meta no-index tag in the header of the individual pages. The robots.txt block MUST NOT be in place in order for this tag to be discovered and obeyed. There is a setting on the WordPress Settings -> Reading page to disallow crawling which automatically adds the meta no-index tag to each page's header.
Unfortunately, the problem is bigger than you stated, as I'm finding almost 70 pages from the dev site indexed in the .co.uk SERPs.
Here are what I see as your two main options, along with their ramifications:
1. Remove the robots.txt no-index directive and allow the 301 redirects to be crawled, eventually causing the dev pages to drop out of the SERPS
- this would be the preferred option if the existing dev site pages have actually started to acquire incoming links and ranking value, but you'd have no control over how long it would take for the competing dev pages to drop out of the index, meaning they will continue to interfere with your SEO until that process completes
- you'll need to check whether any of the other 70 pages in the results have incoming links and if so 301 redirect them as well
- you'll need to add meta-robots no-index tags to the header of each of the remaining non-redirected pages on the dev site to get them removed from the index.
2. Use the URL Removal Tool in Google and Bing Webmaster Tools to have the dev site removed from the index
- likely the fastest way to get the competing URLs out of the indexes, but would mean that any acquired link authority from the dev pages would be lost, not transferred to the live site.
- would still require either leaving the robots.txt block in place or, better yet, removing it and replacing it with meta no-index tags in the header of every page on the dev site.
- you'd need to remove the 301 redirects
- since the search engines consider subdomains completely separate sites, you'd need to set up and verify the dev subdomain as a separate site in both Google and Bing webmaster tools in order for the URL Removal Tool to work.
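As an aside, if editing every page template to add the meta tag is awkward, the same no-index signal can be sent as an HTTP response header from Apache - a sketch, assuming mod_headers is available on the dev server:

```apache
<IfModule mod_headers.c>
  # Applies the equivalent of a meta no-index tag to every response.
  Header set X-Robots-Tag "noindex, follow"
</IfModule>
```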
I've never actually used the URL Removal tool on a full subdomain before, but see no reason why it wouldn't work as expected. You could actually test it out first on your dev.birdybanners.co.uk/ site as it has the same problem of the dev site being indexed in the SERPs.
Hope that helps give you a strategy to resolve the problem? Be sure to holler if you need me to better clarify anything.
Paul
-
Hi Andy,
Thanks for your response.
When I visit remove URLs, I enter dev.rollerbannerscheap.co.uk but then it displays the URL as http://www.rollerbannerscheap.co.uk/dev.rollerbannerscheap.co.uk.
I want to remove a sub domain not a page, are you able to assist?
-
In GWT, ensure you have removed the directory/subdomain from listings/index (under Optimisation > Remove URLs).
It may take a week to kick in, but if your 301s are working and robots.txt is in place, it will work.
In addition, ensure you are using canonical tags pointing to the live location, not dev.