Best way to block a sub-domain from being indexed
-
Hello,
The search engines have indexed sub-domains I did not want indexed. They are on old.domain.com and dev.domain.com. I was going to password-protect them, but is there a best-practice way to block them?
My main domain's default robots.txt says:
Sitemap: http://www.domain.com/sitemap.xml
# global
User-agent: *
Disallow: /cgi-bin/
Disallow: /wp-admin/
Disallow: /wp-includes/
Disallow: /wp-content/plugins/
Disallow: /wp-content/cache/
Disallow: /wp-content/themes/
Disallow: /trackback/
Disallow: /feed/
Disallow: /comments/
Disallow: /category//
Disallow: */trackback/
Disallow: */feed/
Disallow: /?
-
Hi,
CleverPhD has some interesting ideas with robots.txt and Google Webmaster Tools, but simply password protecting all dev pages should keep pages out of Google's index. There's no best practice here, since a password wall will keep Googlebot out on its own.
To be doubly safe, you can also include a meta noindex tag on dev pages.
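For reference, a noindex tag is a single line placed in the head of each dev page (the nofollow directive here is optional, added as an extra precaution):

```
<head>
  <!-- Tells compliant crawlers not to index this page or follow its links -->
  <meta name="robots" content="noindex, nofollow">
</head>
```

Note that crawlers must be able to fetch the page to see this tag, so don't also block the page in robots.txt if you rely on noindex.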
Keep in mind that once a page is in Google's index, it's going to take a while for it to leave (unless you use CleverPhD's method). But having a blank page in Google's index really isn't all that bad. It's there, but it won't rank for much.
Hope this helps,
Kristina
-
I've never tried a method like this - FreshFireOne, did you?
-
First and foremost, when you finish all this: password-protect your dev instances. A URL will leak out eventually, and then this happens. I know it is a PIA, but it is worth it.
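As a sketch, password protection on an Apache server can be done with basic auth in an .htaccess file (the file paths and realm name below are placeholders, not from the thread):

```
# .htaccess in the dev subdomain's document root
AuthType Basic
AuthName "Dev site - authorized users only"
# Keep the password file outside the web root; this path is an example
AuthUserFile /home/example/.htpasswd
Require valid-user
```

The password file itself can be created with `htpasswd -c /home/example/.htpasswd username`. Crawlers receive a 401 and cannot fetch the pages at all.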
To remove subdomains: go into GWT and register each subdomain as a separate website. Create a robots.txt for each subdomain (not the one you mention; you need a robots.txt specific to that subdomain that disallows all files). If you can't do that, have your subdomains include a noindex meta tag on all pages. You have to be careful with this, as you do not want to push your dev robots.txt or the noindex meta tags out to your production server, but it can be done. Talk to your devs. Then go into GWT and use the URL removal tool. Just leave it blank and it will remove the whole site.
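A disallow-all robots.txt for the dev subdomain is just two lines, served at e.g. dev.domain.com/robots.txt:

```
User-agent: *
Disallow: /
```

This blocks crawling of every path on that host without affecting the main www site, whose robots.txt is a separate file.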
Poof. Gone. You can then watch the GWT accounts. They will show errors for the dev site like "Severe health issues are found on your site - Some important page has been removed by request." This is a good error, as it confirms that the subdomain is removed.
We actually used this not on a dev site but on our www1 server, which had been indexed. We use a load balancer with multiple copies of the site, and www1 was competing with www. The approach above did the trick.