Best way to remove full demo (staging server) website from Google index
-
I've recently taken over an in-house role at a property auction company. They have a main site on the top-level domain (TLD) and 400+ agency subdomains!
I recently found that the web development team has a demo domain per site, hosted on a subdomain of the original domain and mirroring it. The problem is that these have all been found and indexed by Google.
Obviously this is a problem, as it creates duplicate content and so on. So my question is: what is the best way to remove the demo domains / subdomains from Google's index?
We are taking action to add a noindex tag to the header of all pages on the individual domains, but this isn't going to get them removed any time soon! Or is it?
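For reference, the tag in question is a single line in the `<head>` of every page on the demo domains (the `nofollow` part is optional):

```html
<!-- On every page of each demo/staging domain -->
<meta name="robots" content="noindex, nofollow">
```

If editing 400+ sets of templates is impractical, the same directive can instead be sent from the server as an `X-Robots-Tag` HTTP response header, which Google honours in the same way.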
I was also going to add a robots.txt file to the root of each domain, just as a precaution! Within this file I intended to disallow all.
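A "disallow all" robots.txt is just two lines, placed at the docroot of each demo domain. One caveat worth knowing: once robots.txt blocks crawling, Googlebot can no longer fetch the pages to see a noindex tag, so already-indexed URLs can linger in the index even though their content is no longer crawled:

```text
# robots.txt at the root of each demo domain
User-agent: *
Disallow: /
```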
The final course of action (which I'm holding off on in the hope someone comes up with a better solution) is to add each demo domain / subdomain to Google Webmaster Tools and remove the URLs individually.
Or would it be better to go down the canonical route?
-
Why couldn't I just put a password on the staging site, and let Google sort out the rest? Just playing devil's advocate.
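For what it's worth, password-protecting a staging site is usually only a few lines of HTTP Basic auth. A sketch for Apache, with the credentials file path and realm name as placeholders:

```apache
# .htaccess (or vhost config) on each demo domain
AuthType Basic
AuthName "Staging"
AuthUserFile /path/to/.htpasswd
Require valid-user
```

Pages that start returning 401 to Googlebot will eventually drop out of the index, though not instantly.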
-
If you have enough time to verify each subdomain in WMT and then remove 400+ domains one by one, you can go for solution 2. You can't remove a subdomain from the verified WMT account of the main domain, which is why you'd need to verify each one separately.
Adding a canonical is the better option. It won't remove all of the demo domains from Google's index rapidly, and you may have to wait a few months, but you'll be on the safe side.
-
Out of curiosity, why wouldn't you recommend solution 2?
You mentioned that you faced a similar kind of situation in the past, how did that work out? Which of the 3 solutions (or all) did you opt for?
-
Good advice, but an IP restriction for the demo sites won't be possible on this occasion, as our router hands out a range of different IP addresses and we occasionally need the sites to be viewed externally! Any other suggestions to help?
-
I'd also recommend putting in an IP restriction for any of the demo sites.
That way, if anyone visits a demo site from a non-whitelisted IP address, you can display an error message or simply redirect them over to the live site.
That should get the results removed from the search engines fairly quickly.
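As a sketch of the whitelist approach on Apache 2.4 (the IP range below is a placeholder for your own office range):

```apache
# In each demo vhost: only whitelisted addresses may view the site;
# everyone else receives a 403. A mod_rewrite rule could redirect
# non-whitelisted visitors to the live site instead, if you'd rather
# not show an error page.
<Location "/">
    Require ip 203.0.113.0/24
</Location>
```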
Hope this helps!
-- Jeff
-
Solution 1:
Add a robots.txt to all demo domains and block them from being crawled, or add a noindex tag in their headers.
Solution 2: Verify each domain in Webmaster Tools and remove it entirely via the URL removal section (I wouldn't recommend this).
Solution 3:
If both of your domains, e.g. agency1.domain.com and demo.agency1.domain.com, run the same code and are clones, then add a canonical URL to agency1.domain.com pointing to http://agency1.domain.com/. Since the templates are shared, the tag should automatically show up on the demo domain too; if it doesn't, add the same canonical to the demo domain manually.
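Using the example hostnames above, the tag on each demo page would point at the matching live page, e.g.:

```html
<!-- In the <head> of a page on demo.agency1.domain.com -->
<link rel="canonical" href="http://agency1.domain.com/some-page">
```

(`some-page` is a placeholder; each demo page's tag must point at its equivalent live URL, not just the homepage.)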
It will take some time for the pages to be deindexed from the SERPs, but it will surely work. I've faced the same kind of situation in the past.
-
Noindex is your best option, really. It might take weeks, but I don't think any other method is going to be faster. Plus, technically speaking, "noindex" is the proper method for what you want to do - canonical tags or a robots.txt may do the job, but they aren't exactly the right way.
Good luck!