Development site is live (and has been indexed) alongside the live site - what's the best course of action?
-
Hello Mozzers,
I am undertaking a site audit and have just noticed that the developer has left the development site up and it has been indexed. They 301'd pages on the old site to the equivalent pages on the new site, but they seem to have allowed the development site to be indexed and haven't switched it off. So would the best option be to redirect the development site pages to the homepage of the new site (there is no PR on the dev site and no incoming links to it, so nothing much to lose...)? Or should I request equivalent-to-equivalent page redirection?
Alternatively I can simply ask for the dev site to be switched off and the URLs removed via WMT, I guess...
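(For reference, by equivalent-to-equivalent redirection I mean something roughly like the rule below - just a sketch, assuming the dev site runs on Apache with mod_rewrite, and using dev.example.com / www.example.com as placeholder hostnames:

    # Send every dev URL to the same path on the live site with a 301
    RewriteEngine On
    RewriteCond %{HTTP_HOST} ^dev\.example\.com$ [NC]
    RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]

The homepage-only option would simply point the RewriteRule target at http://www.example.com/ instead of /$1.)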
Thanks in advance for your help!
-
Very pleased to have been of assistance
Here are links to older threads where I asked something similar before, for further verification and credit to those who originally helped me:
-
Thanks Amelia - yes, you're definitely on the right lines - Dan's response below is very helpful too, that's for sure. I do struggle with developers from time to time, so I'm teaching myself coding and so on via Codecademy, etc. - I learnt some at uni many years ago but it's very out of date! It will come in useful for SEO too.
-
Many thanks Dan - much appreciated - that process makes perfect sense and applies in my case too :)))) I will report back on progress in a month or so...
-
Yes, a great answer there from Dan - and thanks for your useful input - good point re: not relying on robots.txt alone!
-
Thanks Robert, and for the extra comments too!
I can't remember which Mozzer helped me with the above in the first place (they should be credited), but I'll track down the original thread and add it to this post, since it also contains further info and discussion.
All Best
Dan
-
Dan,
This is a very good answer. Just to emphasize, probably the most important piece with a "dev" site is the last one Dan mentions: password protection. Once you clean up the issue, add it and you should not have the problem going forward.
Even with robots.txt on our dev sites and our design studio, we have had pages end up in the SERPs. Because of the DA of our design studio (where clients go to approve a comp, etc.), we recently had a new political client's comp ranking for a search term on page one - ahead of their actual site (we were building another to replace it). So, even with robots.txt, there is still no guarantee a page will not be crawled.
Adding password protection will help with that. Lastly, if you have someone building you a site and they say they do not want to take down the dev version after your launch, tell them you do not wish to pay them. It will go down. That demand is unreasonable; I cannot think of a reason to keep the dev version live once the client site launches.
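For anyone wondering what adding password protection actually involves, a minimal sketch using HTTP Basic Auth in an Apache .htaccess file on the dev site is below - the path and user name are placeholders, it assumes Apache with .htaccess overrides allowed, and nginx/IIS have their own equivalents:

    # Create the credentials file once on the server (example path and user only):
    #   htpasswd -c /home/devsite/.htpasswd devuser
    AuthType Basic
    AuthName "Development site - authorised users only"
    AuthUserFile /home/devsite/.htpasswd
    Require valid-user

Crawlers cannot get past the 401 response this produces, so nothing behind it gets crawled, whatever robots.txt says.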
Again, good job Dan.
-
Hi
I'm in a similar-ish situation with a client's site.
Their dev site is on a subdomain, i.e. staging.domain.com, and they want to keep the staging area active for demonstrating future development work, so the situation may be slightly different from yours.
They have now blocked it via robots.txt, but that's like shutting the stable door after the horse has bolted.
I asked Moz Q&A a few months ago and got the answer below from a few very helpful and wise Mozzers:
-
1. Set up a completely different Webmaster Tools account, unrelated to the main site, so that there is a new WT account specific to the staging area sub-domain.
2. Add a robots.txt on the staging sub-domain that disallows all pages to all crawlers, OR use the noindex meta tag on all pages - though Google much prefers robots.txt usage for this. (A minimal sketch of both is just below this list.) Note: it's very important that when you update the main site it does not include or push out these files and instructions too, since that would result in the main site being de-indexed.
3. Request removal of all pages in GWT. Leave the form field for the page to be removed blank, since that will remove all of the subdomain's pages.
4. After about 1 month, or once you see that the pages are all out of the search engine listings (SERPs) and Google has spidered and seen the robots.txt, put up a password on the entire staging site.
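To illustrate step 2, here's a minimal sketch - the robots.txt lives at the root of the staging sub-domain only (e.g. staging.domain.com/robots.txt), and the meta tag goes in the head of every staging page; treat both as examples rather than exact files from my client's setup:

    # robots.txt on the staging sub-domain - block all crawlers from everything
    User-agent: *
    Disallow: /

    <!-- or, alternatively, a noindex meta tag on every staging page -->
    <meta name="robots" content="noindex, nofollow">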
Hope that helps
All Best
Dan
-
Hi Luke,
I'm interested in other responses to this question...
If I were in your position, after seriously berating the dev I would make sure you disallow the dev site in its robots.txt and use Webmaster Tools to remove the URLs from the index. Then I would password-protect the dev site so the search engines couldn't get there even if they tried.
Like I say, I'm interested in other responses! This is what I would do, but I don't really know if it's definitely the right thing to do. Does anyone else have anything to add?
Best of luck - it's crappy when someone else's error cocks up your work: when our site launched for the first time, our IT department screwed up on a monumental scale by getting the DNS settings wrong.
Amelia