Dev Site Was Indexed By Google
-
Two of our dev sites (subdomains) were indexed by Google. We made them private as soon as we found the problem. Should we take a further step to remove the subdomains, such as through robots.txt, or just let it ride out?
From what I understand, to remove a subdomain from Google we would verify the subdomain in Google Webmaster Tools (GWT), then give the subdomain its own robots.txt that disallows everything.
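For reference, a disallow-everything robots.txt is only a couple of lines. It would be served at the subdomain's root (dev.example.com is a placeholder; yours would be dev.yourdomain.com/robots.txt):

# Block all crawlers from the entire dev subdomain
User-agent: *
Disallow: /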
Any advice is welcome; I just wanted to discuss this before making a decision.
-
We ran into this in the past, and one thing we think happened is that links to the dev site were sent via email to several Gmail accounts. We suspect this is how Google indexed the site, as there were no inbound links posted anywhere.
I think the main issue is how it's perceived by the client and whether they are freaking out about it. If so, putting the site behind an access-control password will keep anyone without credentials from seeing it.
The robots.txt file should flush it out of the index, but yes, it takes a little time.
-
I've had this happen before. On the dev subdomain, I added a robots.txt that excluded everything, verified the subdomain as its own site in GWT, then requested removal of that site (the dev subdomain).
I then used a free code-monitoring service that checks a URL for changes once a day. I set it up to watch the live site's robots.txt and the robots.txt of every dev site, so I'd know within 24 hours if the developers had tweaked any of them.
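If you'd rather not depend on a third-party service, here is a minimal sketch of the same monitor in Python (the URLs and state-file name are placeholders; schedule it daily with cron or similar):

import hashlib
import json
import pathlib
import urllib.request

# robots.txt URLs to watch (placeholder domains)
URLS = [
    "https://www.example.com/robots.txt",
    "https://dev.example.com/robots.txt",
]

STATE = pathlib.Path("robots_hashes.json")  # hashes from the previous run

def fetch_hash(url):
    # Download the file and return a hash of its contents
    with urllib.request.urlopen(url, timeout=10) as resp:
        return hashlib.sha256(resp.read()).hexdigest()

def main():
    previous = json.loads(STATE.read_text()) if STATE.exists() else {}
    current = {url: fetch_hash(url) for url in URLS}
    for url, digest in current.items():
        if previous.get(url) not in (None, digest):
            print(f"CHANGED: {url}")  # swap in an email or chat alert here
    STATE.write_text(json.dumps(current))

if __name__ == "__main__":
    main()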
-
Hi Tyler,
You definitely don't want to compete with yourself over duplicate content. If the current subdomains have little link juice (inbound links) pointing to them, I would simply block the domains from being indexed any further. If there are a couple of high-value pages, it may be worth the time to 301 redirect them so you don't lose any links or juice.
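As a sketch, a page-level 301 in an Apache .htaccess file looks like this (paths and domain are placeholders):

# Permanently redirect a high-value dev page to its live equivalent
Redirect 301 /old-dev-page/ https://www.example.com/live-page/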
Using robots.txt or noindex meta tags may work, but in my personal experience the easiest and most effective way to block any indexing is simply to use .htaccess / .htpasswd. This will prevent anybody without credentials from even viewing your site, effectively blocking all spiders, bots, and unwanted snoopers.
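A minimal sketch of that setup on Apache (the .htpasswd path and username are placeholders):

# .htaccess in the dev site's document root
AuthType Basic
AuthName "Development Site"
AuthUserFile /path/to/.htpasswd
Require valid-user

Then create the password file on the server with: htpasswd -c /path/to/.htpasswd devuser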
-
Hey Tyler,
We would follow the same protocol in your shoes: remove any indexed instances of the dev subdomain(s), then create a new robots.txt file for each subdomain, and disavow any indexed links as an extra step. Also, double-check and even resubmit your root domain's XML sitemap so Google can reindex your main content/links as a precautionary measure.
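If you do use the disavow tool, the file it accepts is plain text with one entry per line; a sketch with placeholder domains:

# Lines starting with a hash are comments
domain:dev.example.com
http://dev2.example.com/specific-indexed-page.html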
PS - We develop on a separate server and domain for any new work, whether for our own site or client sites. Doing this allows us to block Google from everything.
Hope this was helpful! - Patrick