My beta site (beta.website.com) has been inadvertently indexed. Its cached pages are taking traffic away from our real website (website.com). Should I just "NO INDEX" the entire beta site and if so, what's the best way to do this? Are there any other precautions I should be taking? Please advise.
-
On your beta sites in future, I would recommend using Basic HTTP authentication so that spiders can't even access them. For Apache, put something like this in the beta site's config or .htaccess:
AuthUserFile /var/www/sites/passwdfile
AuthName "Beta Realm"
AuthType Basic
Require valid-user
Then create the password file with: htpasswd -m /var/www/sites/passwdfile username
If you do this as well, Google's Removal Tool will go "ok, it's not there, I should remove the page", because they usually check for the content in the page before processing a removal. If you don't remove the text, they may not process the removal request, even if the page has noindex (though I'm not certain that's the case).
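As a side note on why this blocks spiders: clients that know the credentials send an Authorization header containing a Base64-encoded username:password pair, and anything without it (crawlers included) gets a 401 before any page content is served. A minimal sketch of what that header looks like (the username and password here are made up):

```python
import base64

# Hypothetical credentials for the "Beta Realm" above
username, password = "username", "secret"

# Browsers build this header after you log in; crawlers never send it,
# so Apache answers 401 and the beta content stays hidden from indexing.
token = base64.b64encode(f"{username}:{password}".encode()).decode()
header = f"Authorization: Basic {token}"
print(header)
```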
-
-
In Webmaster Tools, set the subdomain up as its own site and verify it.
-
Add a robots.txt on the subdomain (beta.website.com/robots.txt) containing:
User-agent: *
Disallow: /
-
You can then submit the site for removal in Google Webmaster Tools:
- Click "optimization" and then "remove URLs"
- Click "create a new removal request"
- Type the URL "http://beta.website.com/" in there
- Click "continue"
- Click "submit request".
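Before filing the removal request, it's easy to sanity-check that the Disallow rule really blocks every URL on the subdomain. A quick sketch using Python's standard-library robots.txt parser (it assumes the file contains exactly the two lines recommended above):

```python
from urllib import robotparser

# Assumed content of http://beta.website.com/robots.txt
robots_txt = "User-agent: *\nDisallow: /\n"

rp = robotparser.RobotFileParser()
rp.parse(robots_txt.splitlines())

# With "Disallow: /", no crawler may fetch any path on the subdomain
print(rp.can_fetch("*", "http://beta.website.com/"))
print(rp.can_fetch("Googlebot", "http://beta.website.com/any/page.html"))
```

Both checks should print False, confirming the whole subdomain is off-limits to compliant crawlers.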
-
-
Agreed on all counts with Mark. In addition, if you haven't done this already, make sure you have canonical tags in place on your pages. Good luck!
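For completeness, a canonical tag is just a link element in the head of each beta page pointing at the matching production URL, so any authority the beta copy picks up is consolidated onto the real site. A sketch with a made-up path:

```html
<!-- In the <head> of http://beta.website.com/some-page/ -->
<link rel="canonical" href="http://website.com/some-page/">
```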
-
You can add noindex to the whole subdomain, and then wait for the crawlers to remove it.
Or you can register the subdomain with webmaster tools, block the subdomain via the robots.txt with a general Disallow: / for the entire subdomain, and then use the URL removal tool in Webmaster Tools to remove the subdomain via robots.txt. Just a robots.txt block won't work - it won't remove the pages, it'll just prevent them from being crawled again.
In your case, I would probably go the route of the robots.txt block plus the URL removal tool, which will remove the pages from Google. Once that has happened, I would add the noindex tag across the whole subdomain and remove the robots.txt block - that way crawlers can actually see the noindex, and all search engines will drop the pages from their indexes (with the robots.txt block still in place they could never crawl the pages to find the tag).
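Rather than editing every template to add the meta tag, the noindex step can be applied in one place at the server level via the X-Robots-Tag response header, which search engines treat the same as a meta robots tag. A sketch for Apache (it assumes mod_headers is enabled; nothing in the thread confirms the server setup):

```apache
# In the beta.website.com vhost or .htaccess
# Sends "X-Robots-Tag: noindex, nofollow" with every response
Header set X-Robots-Tag "noindex, nofollow"
```

This also covers non-HTML files (PDFs, images) that a meta tag can't reach.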
Mark