I accidentally blocked Google with Robots.txt. What next?
-
Last week I uploaded my site and forgot to remove the robots.txt file with this text:
User-agent: *
Disallow: /
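For reference, that pair of directives tells every crawler to stay away from the entire site. A fixed file that allows full crawling simply leaves the Disallow value empty; a minimal sketch, assuming you keep a robots.txt in place at all rather than deleting it:

User-agent: *
Disallow: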
I dropped from page 11 on my main keywords to past page 50.
I caught it 2-3 days later and have now fixed it. I resubmitted my sitemap in Webmaster Tools and also did a Fetch as Google there. I tweeted out my URL too, hoping to get Google to crawl it faster.
Webmaster Tools no longer says that the site is experiencing outages, but when I look at my blocked URLs it still says 249 are blocked. That's actually gone up since I made the fix.
In the Google search results, my listing still shows no page title, and the description still says "A description for this result is not available because of this site's robots.txt – learn more."
How will this affect me long-term? When will I recover my rankings? Is there anything else I can do?
Thanks for your input!
-
Excellent. Good luck on your climb to page 1.
-
RESULTS: So Google re-crawled my site at some point since I fetched it through Webmaster Tools. My page in the search results now has my title and meta description back and I bounced back to page 15. Page 15 isn't too abnormal, as before I accidentally blocked myself I had been bouncing around from page 11 to 18.
Thanks for your input, and hopefully this will help someone in the future.
-
-
Thanks for the input. Where do I fetch my URL within GWT? Any ideas on how quickly I'll regain my rankings? Will I have to make up some ground with my SEO work, or will Google just place me back where I was once they re-crawl my site?
-
I agree with Sebastian's follow-up suggestions. It's way quicker to de-index yourself than it is to get those rankings back. Even once the robots.txt is cleared, it can take time to get back to where you were.
-
Hi,
Sorry to hear you're having issues; we've all accidentally blocked our own sites at one point or another, I'm sure.
You've done pretty much everything you can to get your site re-indexed. The only other thing I'd suggest is doing a Fetch as Google on your URL within GWT, then taking it a step further by submitting the URL along with all of its linked pages.
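If you want a quick sanity check outside of GWT that the block really is cleared, Python's built-in robotparser can tell you whether Googlebot is allowed to fetch a page again. A rough sketch, using a placeholder domain rather than your real one:

from urllib.robotparser import RobotFileParser

# Point the parser at the live robots.txt (example.com is a placeholder)
rp = RobotFileParser()
rp.set_url("http://www.example.com/robots.txt")
rp.read()

# True means the current robots.txt no longer blocks Googlebot from the homepage
print(rp.can_fetch("Googlebot", "http://www.example.com/"))

Note this only checks the rules Google will see on its next visit; Webmaster Tools can keep reporting blocked URLs until it re-fetches the file.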
If you haven't recovered within a week or so, then it may be an idea to do a reconsideration request.
Good luck!
Related Questions
-
Google News problem
Hello to all. The latest Google algorithm changes have had a big impact on the way Google News features stories, at least in my country. I was featured heavily in Google News until about the 6th of October, when the changes had the biggest impact, but since then I haven't been featured at all. Prior to this, I would be featured for keywords on almost any article, not necessarily in the 1st position, but I was almost always there. Posts still show up in the dedicated News category, but not in the main search pages. I've seen a lot of websites being impacted, but some with lower ranks than mine still show up there.
I made no changes prior to the 6th of October, and I haven't done any link building campaigns, just getting links from higher-ranking news sites in my country for articles I wrote. What I'd like to know is whether there were any major changes to Google News that I'm not complying with, or how I could check for other problems. I don't have any penalties disclosed by Google, and no new errors in the Webmasters console; I'm just baffled that overnight the website was completely cut off from being featured in Google News.
And one other strange thing: I'm now ranking better for searches that are essentially the opposite of my website's main theme. Think of mainly writing about BMW and less about Audi, but ranking a lot better for the latter and a lot less for the former. Thank you.
Technical SEO | thefrost0
-
One robots.txt file for multiple sites?
I have 2 sites hosted with Blue Host and was told to put the robots.txt in the root folder and just use the one robots.txt for both sites. Is this right? It seems wrong. I want to block certain things on one site. Thanks for the help, Rena
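For illustration only (the /private/ path below is made up): each hostname serves its own /robots.txt, so on a shared hosting account the usual setup is one file in each site's own document root rather than one shared file, which also lets you block things on one site without touching the other. A sketch of what the two files might look like:

robots.txt for the site where you want to block a section:
User-agent: *
Disallow: /private/

robots.txt for the site you want fully crawlable:
User-agent: *
Disallow: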
Technical SEO | renalynd270
-
How to rank in Google Places
Normally, I don't have a problem with local SEO (I'm more of a multi-channel sort of online marketing guy), but this one has got me scratching my head. Look at https://www.google.co.uk/search?q=wedding+venues+in+essex There are two websites there (Fennes and Quendon Park) that both have a much more powerful DA but don't appear in the Google Places results (Google+ Business or whatever it's labelled as now). Why are websites such as Boreham House ranking top in the map listings? Quendon Park has a Google Places listing, it's full of content, and the NAP all matches up. It's a stronger website, and Boreham House isn't any closer to the centroid than Quendon Park. This one has just got me stumped.
Technical SEO | jasonwdexter0
-
OK to block /js/ folder using robots.txt?
I know Matt Cutts suggests we allow bots to crawl CSS and JavaScript folders (http://www.youtube.com/watch?v=PNEipHjsEPU), but what if you have lots and lots of JS and you don't want to waste precious crawl resources? Also, as we update and improve the JavaScript on our site, we iterate the version number ?v=1.1... 1.2... 1.3... etc., and the legacy versions show up in Google Webmaster Tools as 404s. For example:
http://www.discoverafrica.com/js/global_functions.js?v=1.1
http://www.discoverafrica.com/js/jquery.cookie.js?v=1.1
http://www.discoverafrica.com/js/global.js?v=1.2
http://www.discoverafrica.com/js/jquery.validate.min.js?v=1.1
http://www.discoverafrica.com/js/json2.js?v=1.1
Wouldn't it just be easier to prevent Googlebot from crawling the js folder altogether? Isn't that what robots.txt was made for? Just to be clear - we are NOT doing any sneaky redirects or other dodgy JavaScript hacks. We're just trying to power our content and UX elegantly with JavaScript. What do you guys say: obey Matt, or run the JavaScript gauntlet?
Technical SEO | AndreVanKets
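For completeness, the rule being contemplated in the question above ("OK to block /js/ folder using robots.txt?") would be a one-liner; a hedged sketch of what blocking the folder could look like, bearing in mind that Google's stated preference is to let Googlebot crawl JS and CSS:

User-agent: Googlebot
Disallow: /js/
-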
Robots.txt not working?
Hello. This is my robots.txt file: http://www.theprinterdepo.com/Robots.txt However, I have 8,000 warnings on my dashboard like the one below. What am I missing in the file?
Crawl Diagnostics Report
On-Page Properties: Title - not present/empty; Meta Description - not present/empty; Meta Robots - not present/empty; Meta Refresh - not present/empty
URL: http://www.theprinterdepo.com/catalog/product_compare/add/product/100/uenc/aHR0cDovL3d3dy50aGVwcmludGVyZGVwby5jb20vaHAtbWFpbnRlbmFjZS1raXQtZm9yLTQtbGo0LWxqNS1mb3ItZXhjaGFuZ2UtcmVmdWJpc2hlZA,,/
Errors: none found. Warnings: 1 - 302 (Temporary Redirect), found about 5 hours ago.
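As an aside that may or may not apply here: the warning above is for what looks like a Magento compare URL, and one common way sites keep crawlers out of those endlessly generated compare/add URLs is a robots.txt rule; a hedged sketch, assuming those pages should be excluded at all (note also that crawlers request the lowercase /robots.txt path, whereas the link above is capitalised):

User-agent: *
Disallow: /catalog/product_compare/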
Technical SEO | levalencia10
-
Google plus
" With a single Google search, you can see regular search results, along with all sorts of results that are tailored to you -- pages shared with you by your friends, Google+ posts from people you know" Would i be able to see my own post which i shared with someone in my Google plus circle, when i do a search ?
Technical SEO | seoug_20050
-
Google Shopping Australia/Google Merchant Centre
So Google Shopping has finally landed in Australia, so we've got some work to do hooking it up to our clients' ecommerce sites. Right now we have a handful of clients who are set up; the feed is getting in there OK, but all products are sitting in "disapproved" status in the dashboard, and clicking into each individual product, the status says awaiting review. I logged a support ticket with Google to get some more info on this, as it doesn't look right to me (i.e. the disapproved status in the dashboard), and got a useless templated answer. It seems that if I switch the country destination to US, the products are approved and live in google.com Shopping search within the hour. Switch back to Australia and they go back to disapproved status. Is anyone having the same issue or has anyone seen this before? I simply don't trust Google support and am wondering if there are other factors at play here.
Technical SEO | Brendo0
-
Google Website Optimizer
So if I'm A/B testing two pages, index.html and indexB.html, shouldn't I nofollow indexB.html? It has all the same content, just a different design.
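As an illustrative aside (not something the thread settles): if the aim is simply to keep the test variant out of Google's index, the usual mechanism is a robots meta tag in the variant page's head rather than nofollowing links to it; a minimal sketch:

<meta name="robots" content="noindex">

That line would go inside indexB.html's head section; whether that's the right call for a Website Optimizer experiment is a separate question.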
Technical SEO | tylerfraser0