Duplicate content from development website
-
Hi all - I've been trawling for duplicate content and then I stumbled across a development URL, set up by a previous web developer, which nearly mirrors the current site (there have been a few content and structure changes since then, but otherwise it's all virtually the same). The developer didn't take it down when the site was launched.
I'm guessing the best thing to do is tell him to take down the development URL (which is specific to the pizza joint, btw) immediately. Is there anything else I should ask him to do?
Thanks, Luke
-
Well, when I did it I put in one removal request for the whole domain and also put a disallow in the robots.txt for the whole site. Matt appears to be referring to putting in too many removal requests, but if you want your whole site removed you only need one, so this wouldn't be an issue - you just put in your domain URL. When you say your page has no snippet, have you checked what your meta description is? This can help influence your snippet text. I would work at getting your development site removed a.s.a.p. and then see what happens with your snippet - I think there's a good chance it could be down to duplicate content issues. Have you checked what the cache for your homepage is in Google's results?
-
Hello Max!
Thank you very much for your answer!
First of all... no, I didn't have Analytics or Webmaster Tools on the development site; I just set up Google Webmaster Tools yesterday to put in the removal request. There are ~1800 pages from the dev site indexed and I was removing them one by one when I found this article by Matt Cutts, so I stopped removing:
http://www.mattcutts.com/blog/overdoing-url-removals/
Do you think it would be a good idea to keep doing it?
As far as I have seen, the development site is not outranking the main site, but my concern is that the main site's home page is showing up in the SERPs with no snippet, so I'm wondering if it's somehow related to the duplicate content issue.
Regarding your suggestion, DEFINITELY... that's the type of thing you assume the development company would take care of... I already asked them to add HTTP authentication to the development site!
I really hope Google picks up the change soon!
Thank you very much for your help, I really appreciate it!
Best regards
-
Hi Max
A couple of questions to understand your situation better - do you have both Google Analytics and Google Webmaster Tools installed on your development site? Is your development site outranking your main site for any of your key terms?
In my experience, unless your development site is outranking your main site, the thing to do is add a robots.txt file to disallow all bot access and then also put in a removal request for your domain in Google Webmaster Tools. I found this fix very quick - everything was sorted within a matter of days.
However, if you are getting traffic to your development site and it is outranking your main site, and you have therefore decided that the rel canonical option is best, I would still remove your development site once the rankings swap around (as Marie pointed out, this took a week or so for her).
As for your development site, I would always aim to have it removed from the index, and once your issues are sorted I would put a password on the whole site so that nobody can access it other than you or someone with the password. This lets you use your development site to its full potential without having to worry about competitors who have found the URL monitoring it even once it is de-indexed!
BTW, when I had this issue I had several thousand pages from my development site indexed in Google. Unfortunately I can't give you an exact time for how long it will take to fix, as it all depends on the current crawl rates for your sites.
Hope this helps!
-
I'm having a very similar problem... the development site got crawled and it has 1,700+ pages indexed in Google. I'm working on redirecting every page from the development site to its equivalent on the production site.
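A minimal sketch of what that kind of blanket redirect can look like in an Apache .htaccess file on the development host (assuming the dev site runs Apache with mod_rewrite enabled; the hostnames here are placeholders, not my real domains):

```apache
# Permanently redirect every path on the dev host to the same
# path on the production host (requires mod_rewrite).
RewriteEngine On
RewriteCond %{HTTP_HOST} ^dev\.example\.com$ [NC]
RewriteRule ^(.*)$ https://www.example.com/$1 [R=301,L]
```

The 301 status tells search engines the move is permanent, so the production URLs should inherit the indexing over time.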
There's something else I don't understand... the home page of the production site is not showing any snippet in the SERPs. Do you think this could be caused by the duplication issue with the development site?
After redirecting from development to production, how long do you think it will take Google to reindex everything and understand that there's no duplicated content anymore?
I would really appreciate your opinions!
Best regards
-
Thanks so much Matt, Keri & Marie - brilliant advice there, really brilliant. With your help it's all removed now.
Blimey, that discovery sure set my heart racing (eeeek.)
-
Thanks Keri, great advice on the use of a code monitor - I have known situations where code changes were made to a development site and the robots.txt was changed or removed by mistake, causing the development site to be indexed again. Monitoring this would have helped us react to that situation so much quicker!
-
I had a similar situation where I had developed a site for a landscaping client. I thought I had gotten rid of the files, but somehow Google found them. My development site ranked #1 for his terms and his own site was on something like page 6 because of the duplicate content. Here's what I did:
I didn't want to take down my site right away because his company was ranking #1 for his keywords. (Even though visitors landed on the development site, they still had his phone number to call.)
I added a rel canonical to the development site that told Google the correct site to index was actually the client's site.
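For anyone unfamiliar, the tag itself is a single line in the head of each development page, pointing at the matching production URL (the domain and path here are placeholders):

```html
<!-- In the <head> of each development page; example.com is a placeholder -->
<link rel="canonical" href="https://www.example.com/services/" />
```

Each dev page should point at its own production equivalent, not all at the homepage.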
Within a week or so, the proper site was ranking #1. At that point I deleted the files for the development site.
-
Excellent advice here. If it's on a subdomain, the subdomain can be claimed in GWT as its own site. You can put a robots.txt on the subdomain then request the entire subdomain be removed from the index.
You may want to go one step further and use something like PolePosition's Code Monitor, which checks the code of any page once per day and alerts you if there's a change. In a similar situation, I had it monitor the robots.txt for the live site and all the development sites where I was working, so I knew if the developers changed something and could react quickly.
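If you don't have a tool like that handy, the core of the check is easy to roll yourself: fetch the file once a day, hash it, and alert when the hash differs from yesterday's. A rough Python sketch of just the comparison step (fetching and alerting are left out; the sample robots.txt content is illustrative):

```python
import hashlib

def has_changed(body: bytes, last_hash: str) -> tuple[bool, str]:
    """Return (changed?, new_hash), comparing today's copy of a
    monitored file (e.g. robots.txt) against yesterday's hash."""
    new_hash = hashlib.sha256(body).hexdigest()
    return new_hash != last_hash, new_hash

# Day 1: no baseline yet, so it registers as changed; store the hash.
changed, baseline = has_changed(b"User-agent: *\nDisallow: /\n", "")
# Day 2: same file contents, so nothing to alert on.
changed, _ = has_changed(b"User-agent: *\nDisallow: /\n", baseline)
print(changed)  # False
```

Run it from cron (or any scheduler) and send yourself an email whenever `changed` comes back true.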
-
Hi Luke
I had the same problem and this is how I fixed it - I registered the development domain with GWT and then put in a removal request. I also got our developers to set up a robots.txt file telling search engines not to index any of the site - the contents of the robots.txt file are as follows:
User-agent: *
Disallow: /
This soon fixed the issue for us. Hope this helps - obviously you don't need the robots.txt if you're just going to take the site down completely, as there will be no worry of people finding it in search engines and mistaking it for your live site, or of search engines finding duplicate content. I used this strategy as we still use the development site for testing etc. before going live.
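As a sanity check, you can confirm a disallow-all file behaves as intended with Python's standard-library robots.txt parser (a quick sketch; the dev URL is a placeholder):

```python
from urllib.robotparser import RobotFileParser

# Parse the two-line disallow-all robots.txt and confirm that
# no path on the (placeholder) dev host is crawlable by any bot.
parser = RobotFileParser()
parser.parse(["User-agent: *", "Disallow: /"])
print(parser.can_fetch("Googlebot", "https://dev.example.com/any-page"))  # False
```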
Can I just check - is the URL on a separate domain? If it isn't, and it is part of your existing domain, you can still block that URL using either a robots.txt file or a noindex, nofollow meta tag. You can also request removal of specific URLs within a site in GWT.
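The meta tag version, for reference, goes in the head of every page you want kept out of the index:

```html
<!-- In the <head> of each page to be kept out of the index -->
<meta name="robots" content="noindex, nofollow">
```

One caveat worth knowing: search engines have to be able to crawl a page to see this tag, so a blanket robots.txt Disallow can actually stop the bot from ever seeing the noindex - pick one approach per page rather than stacking both.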