Removing a site from Google's index
-
We have a site we'd like to have pulled from Google's index. Back in late June, we disallowed robot access to the site through the robots.txt file and added a robots meta tag with "noindex, nofollow" directives. The expectation was that Google would eventually crawl the site and remove it from the index in response to those tags. The problem is that Google hasn't come back to crawl the site since late May. Is there a way to speed up this process and communicate to Google that we want the entire site out of the index, or do we just have to wait until it's eventually crawled again?
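For context, the setup described above would look something like this (a sketch; the directives shown are standard, though the actual file may contain additional rules):

```text
# robots.txt at the site root: blocks all compliant crawlers from every URL
User-agent: *
Disallow: /
```

Each page would also carry `<meta name="robots" content="noindex, nofollow">` in its `<head>`. The catch, as discussed further down the thread, is that the Disallow rule prevents crawlers from ever fetching the pages that carry that meta tag.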
-
OK. It wasn't abundantly clear on first reading. Thank you for your help.
-
Thank you for pointing that out Arlene. I do see it now.
The statement before that line is of key importance for an accurate quote. "If you own the site, you can verify your ownership in Webmaster Tools and use the verified URL removal tool to remove an entire directory from Google's search results."
It could be worded better, but what they are saying is that AFTER your site has already been removed from Google's index via the URL removal tool, THEN you can block it with robots.txt. The URL removal tool removes the pages and keeps them out of the index for 90 days. That's where changing the robots.txt file can help.
-
"Note: To ensure your directory or site is permanently removed, you should use robots.txt to block crawler access to the directory (or, if you’re removing a site, to your whole site)."
The above is a quote from the page. You have to expand the section I referenced in my last comment; I'm just re-posting Google's own words.
-
I thought you were offering a quote from the page. It seems that was your summarization. I apologize for my misunderstanding.
I can see how you could reach that conclusion, but it is not accurate. Robots.txt does not ensure a page won't get indexed. I always recommend using the noindex tag, which should be 100% effective for the major search engines.
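As a quick way to sanity-check that a noindex tag is actually present on a page, here is a minimal sketch using only Python's standard library. The `has_noindex` helper is hypothetical (not part of any SEO toolkit); in practice you would feed it the HTML fetched from your own pages.

```python
from html.parser import HTMLParser


class _RobotsMetaParser(HTMLParser):
    """Collects the directives from any <meta name="robots"> tags."""

    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        # HTMLParser lowercases tag and attribute names; values keep their case.
        if tag == "meta":
            d = dict(attrs)
            if (d.get("name") or "").lower() == "robots" and d.get("content"):
                self.directives += [x.strip().lower() for x in d["content"].split(",")]


def has_noindex(html: str) -> bool:
    """Return True if the HTML contains a robots meta tag with a noindex directive."""
    parser = _RobotsMetaParser()
    parser.feed(html)
    return "noindex" in parser.directives


print(has_noindex('<meta name="robots" content="noindex, nofollow">'))  # True
print(has_noindex('<meta name="robots" content="index, follow">'))      # False
```

This only checks the meta tag; a noindex directive can also be sent as an `X-Robots-Tag` HTTP header, which this sketch does not cover.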
-
Go here: http://www.google.com/support/webmasters/bin/answer.py?answer=164734
Then expand the section below that says: "I want to remove an entire site or the contents of a directory from search results."
They basically instruct you to block all robots in the robots.txt file, then request removal of your site. Once it's removed, the robots file will keep it from getting back into the index. They also recommend putting a "noindex" meta tag on each page to ensure nothing will get picked up. I think we have it taken care of at this point. We'll see
-
Arlene, I checked the link you offered, but I could not locate that quote anywhere on the page. I am sure it is referring to a different context. Using robots.txt as a blocking tool is fine BEFORE a site or page is indexed, but not after.
-
I used the removal tool and just entered a "/" which put in a request to have everything in all of my site's directories pulled from the index. And I have left "noindex" tags in place on every page. Hopefully this will get it done.
Thanks for your comments guys!
-
We blocked robots from accessing the site because Google told us to. This is straight from the webmaster tools help section:
Note: To ensure your directory or site is permanently removed, you should use robots.txt to block crawler access to the directory (or, if you’re removing a site, to your whole site).
-
I have Webmaster Tools set up, but I don't see an option to remove the whole site. There is a URL removal tool, but there are over 700 pages I want pulled out of the index. Is there an option in Webmaster Tools to have the whole site pulled from the index?
-
Actually, since you have access to the site, you can leave the robots.txt disallowed if you go into Google Webmaster Tools, verify your site, and request removal of your entire site. This will involve adding an HTML file or meta tag to your site to verify your ownership. Let me know if you'd like a link with more information.
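For reference, the meta-tag method of ownership verification looks roughly like this (the content value is a placeholder; Webmaster Tools generates the real token for your account):

```html
<!-- placed in the <head> of the site's home page -->
<meta name="google-site-verification" content="YOUR_VERIFICATION_TOKEN" />
```

The HTML-file method works the same way: Webmaster Tools gives you a uniquely named file to upload to the site root, then checks that it can fetch it.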
-
Thank you. Didn't realize we were shooting ourselves in the foot.
-
Hi Arlene.
The problem is that by blocking the site with robots.txt, you are preventing Google from re-crawling your site, so it cannot see the noindex tag. If you have properly placed the noindex tag on all the pages in your site, then modify your robots.txt file to allow Google to crawl the site. Once that happens, Google will begin crawling your site and will be able to deindex your pages.
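Concretely, the fix described above amounts to reopening the site in robots.txt while keeping the noindex tags in place (a sketch; an empty Disallow value permits crawling of everything):

```text
# robots.txt: allow crawlers back in so they can see the noindex tags
User-agent: *
Disallow:
```

Each page keeps its `<meta name="robots" content="noindex">` tag, so once Google recrawls it, the page drops out of the index.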
The only other suggestion is to submit a sitemap and/or remove the "nofollow" tag. With the nofollow tag on all your pages, Google may only visit your site one page at a time, since you are telling the crawler not to follow any links it finds. You are blocking its normal discovery of your site.
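A minimal sitemap for this purpose might look like the sketch below (the URLs are hypothetical placeholders; you would list each page you want Google to recrawl):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.example.com/</loc>
  </url>
  <url>
    <loc>http://www.example.com/some-page-to-deindex</loc>
  </url>
</urlset>
```

Submitting it through Webmaster Tools gives Google an explicit list of URLs to fetch, working around the lack of followable links.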