Staging & Development areas should not be indexable (i.e. noindex/nofollow in meta robots etc.)
-
Hi
I take it that if there's a staging or development area on a subdomain for a site, whose content is hence usually duplicate, then it should not be indexable (i.e. noindexed & nofollowed in meta robots)? In order to prevent duplicate content problems, as well as to stop non-project people seeing work in progress or finding it accidentally in search engine listings?
Also, if there's no such info in the meta robots, is there any other way it may have been made non-indexable - or at least had the duplicate content problem removed by canonicalising each page to the equivalent page on the live site?
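For clarity, by canonicalising I mean each staging page carrying something like this in its head, pointing at its live equivalent (example.com is just a placeholder domain):

<link rel="canonical" href="https://www.example.com/equivalent-page/">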
In the case in question, I am finding the staging/dev area listed in the SERPs when I search for its URL, so I presume this needs urgent attention?
Cheers
Dan
-
- Use robots.txt vs the meta tags - robots.txt is preferred.
-
I'm about to issue these instructions - I'd appreciate it if you could quickly confirm this covers your advice correctly and nothing is missing:
1) Set up a completely different GWT account, unrelated to the main site's, so that there is a new GWT account specific to the staging subdomain.
2) Add a robots.txt on the staging subdomain that disallows all pages for all crawlers (see the example below), OR use the noindex meta tag on all pages. It's obviously very important that when you update the main site it DOES NOT include or push out these files too (since that would result in the main site or its pages being de-indexed).
3) Request removal of all pages in GWT. Leave the form blank for the page to be removed, since this will remove the entire site.
4) After about 1 month (or when you see that the pages are all out of the SERPs), and Google has spidered the site and seen the robots.txt, put up a password on the entire staging site.
Note: for brand new staging areas that don't yet exist, or that exist but are new and not yet showing up in the index, simply add a password for human access to prevent the above process being required in the future.
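For reference, the disallow-all robots.txt in step 2 - served only on the staging subdomain, never pushed to the live site - would simply be:

User-agent: *
Disallow: /

-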
Thanks for clarifying that CleverPHD & thanks again for all your help and great advice
Have a great weekend !!
All Best
Dan
-
That is a completely valid question. This is why you set up the separate GWT account for dev.domain.ext vs www.domain.ext - when you submit the removal request, it will only be in the dev.domain.ext account.
The only thing you want to watch is that, if you set up a robots.txt in your dev environment, it does not get pushed out to your production server. That is the only gotcha as I see it.
-
thanks !
As per my last question - there's no risk of accidentally taking out the main site as part of this process?
cheers
dan
-
Thanks so much for that great advice
Just a bit worried about getting the main site removed by accident - I take it that as long as it's a brand new GWT account for that specific subdomain, this can't happen?
Cheers
Dan
-
Here is Google's documentation on how to use GWT to remove a page/directory/site, and then the interaction with robots.txt:
http://googlewebmastercentral.blogspot.com/2010/03/url-removal-explained-part-i-urls.html
"In order for a directory or site-wide removal to be successful, the directory or site must be disallowed in the site's robots.txt file."
Side story. I once had a subdomain that I needed to take out, but I could not modify the robots.txt file properly (long story). So, we used the GWT tool and the meta noindex tag. It still worked, but I think that would only be a backup approach to the one suggested by the documentation.
-
Usually this would be true: you would need to use the noindex tag to get things out of the SERPs, and you would need to leave the robots.txt "open" to the crawlers. But when you are working with the Remove URL tool in GWT, they recommend that you then put the site in robots.txt to keep them out of it.
The removal tool in GWT takes care of Google taking the URLs out, and then the robots.txt keeps the bots from coming back. Just a different sequence than if you were to use the noindex meta.
-
If you create the GWT account for the dev site and you submit it for removal, GWT requires that you either a) have the site blocked in robots.txt or b) have a noindex meta tag on the pages. Otherwise they will just crawl you again later and you are back in the index. See my post from earlier.
-
Short answer - no dev sites should be public to start with, to anyone (let alone Google et al.). The simplest way is to put an htaccess password on all your dev sites. You can do a password per person in your company, or just one general one that everyone on the dev team shares.
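A minimal sketch of that htaccess password, assuming an Apache server (the realm name and .htpasswd path below are placeholders):

# Require a login for everything under this directory
AuthType Basic
AuthName "Dev Site"
AuthUserFile /path/to/.htpasswd
Require valid-user

The password file itself can be created with Apache's htpasswd utility, e.g. htpasswd -c /path/to/.htpasswd devuser.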
If you do have a dev site in the SERPs, the simplest way to get it out is to set up a GWT account for that subdomain (e.g. dev.yourdomain.ext), then go into that account and request removal of all pages. You just leave the form blank for the page to be removed and it takes out the whole site. You then need a robots.txt on dev.yourdomain.ext (different from the www. version) that disallows all pages for all crawlers - that, or use the noindex meta tag on all pages.
After about 1 month (or when you see that the pages are all out of the SERPs), I would then put up a password on that entire site and be done with it. Key point: don't put the password up until you have let Google spider the site and see the robots.txt etc.
Also, if you have any other staging sites out there, like test.yourdomain.ext etc., and they are not indexed, go ahead and put the password up on them to limit your exposure.
Public dev sites are the fastest way to get duplicate content into the index and to jack with the rankings of your current site. It is key that all of them are locked down. If one of your developers says it is no big deal, call BS - it is a big deal and it can cause a big mess.
-
Hey Dan,
In this case, I would not exclude crawling via robots.txt. Perhaps later, after you have verified the URLs are out of the index.
Just because Google can't crawl a page doesn't mean they won't keep it in the index. Excluding crawling will not get a page out of the index.
Add the NOINDEX, FOLLOW tag you listed above and give it some time.
Use GWT if it's urgent or the information is sensitive.
-
Thanks Anthony,
The staging area already exists and is indexable as far as I can tell.
So I need to tell the developers to exclude crawling via robots.txt, and add a noindex tag to the head of each page while keeping it followed so it's still crawlable - i.e. <meta name="robots" content="noindex, follow"> within the head section of every page on the dev area
(OR alternatively just remove the URLs via GWT).
If excluding crawling via the robots.txt file, why do you need to add a noindex tag to each page too? Surely the robots.txt deals with this situation?
cheers
dan
-
Ideally, when creating a new staging area, you'd want to exclude crawling via robots.txt.
Add the noindex tag to the head of your pages to get them removed from the SERPs. Make sure the pages are still crawlable though, because if you exclude them in robots.txt first and then noindex them, Google won't be able to see the new noindex tag.
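The tag itself, placed in the head of each staging page, would be:

<meta name="robots" content="noindex, follow">

(noindex drops the page from the SERPs; follow leaves its links crawlable.)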
If there are not a lot of pages to remove, you can request page removal within Google Webmaster Tools.