I accidentally blocked Google with Robots.txt. What next?
-
Last week I uploaded my site and forgot to remove the robots.txt file with this text:
User-agent: *
Disallow: /
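That pair of directives tells every crawler to stay away from the entire site. For anyone who finds this later, undoing it is just a matter of leaving Disallow empty (or removing the file altogether), which allows everything to be crawled, roughly:
User-agent: *
Disallow: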
I dropped from page 11 on my main keywords to past page 50.
I caught it 2-3 days later and have now fixed it. I re-imported my sitemap and did a Fetch as Google through Webmaster Tools, and I tweeted out my URL to hopefully get Google to crawl it faster too.
Webmaster Tools no longer says that the site is experiencing outages, but when I look at my blocked URLs it still says 249 are blocked. That's actually gone up since I made the fix.
In the Google search results, my listing still doesn't show my page title, and the description still says "A description for this result is not available because of this site's robots.txt – learn more."
How will this affect me long-term? When will I recover my rankings? Is there anything else I can do?
Thanks for your input!
-
Excellent. Good luck on your climb to page 1.
-
RESULTS: So Google re-crawled my site at some point since I fetched it through Webmaster Tools. My page in the search results now has my title and meta description back and I bounced back to page 15. Page 15 isn't too abnormal, as before I accidentally blocked myself I had been bouncing around from page 11 to 18.
Thanks for your input, and hopefully this will help someone in the future.
-
Thanks for the input. Where do I fetch my URL within GWT? Any ideas on how quickly I'll regain my rankings? Will I have to make up some ground with my SEO work, or will Google just place me back where I was once they re-crawl my site?
-
I agree with Sebastian's follow-up suggestions. It's way quicker to de-index yourself than it is to get those rankings back. Even once the robots.txt is "cleared", it can take time to get back to where you were.
-
Hi,
Sorry to hear you're having issues; I'm sure we've all accidentally blocked off our own sites at one point or another.
You've done pretty much everything you can do to get your site re-indexed. The only other thing I could suggest is to do a Fetch as Google within GWT, then take it a step further by submitting your URL and all linked pages.
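One more small thing worth doing while you're in the robots.txt file is pointing crawlers at your sitemap from there, so it gets rediscovered on its own. Just a sketch, with the domain and path below as placeholders for your own:
# Placeholder domain and path - swap in your real sitemap URL
Sitemap: https://www.example.com/sitemap.xml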
If you haven't recovered within a week or so, then it may be an idea to do a reconsideration request.
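And before going as far as a reconsideration request, it's worth confirming the live file really is fixed. Python's built-in robots.txt parser will tell you whether Googlebot is still being blocked; this is just a sketch, with example.com standing in for your domain:
import urllib.robotparser

# Load the live robots.txt and ask whether Googlebot may fetch the homepage
rp = urllib.robotparser.RobotFileParser()
rp.set_url("https://www.example.com/robots.txt")
rp.read()
print(rp.can_fetch("Googlebot", "https://www.example.com/"))  # True once the block is gone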
Good luck!
Related Questions
-
Little confused regarding robots.txt
Hi there Mozzers! As a newbie, I have a question: what could happen if I write my robots.txt file like this...
User-agent: *
Allow: /
Disallow: /abc-1/
Disallow: /bcd/
Disallow: /agd1/
User-agent: *
Disallow: /
Hope to hear from you...
Technical SEO | DenorL0
-
Should I block Map pages with robots.txt?
Hello, I have a website that was started in 1999. On the website I have map pages for each of the offices listed on my site, for which there are about 120. Each of the 120 maps is in a whole separate html page. There is no content in the page other than the map. I know all of the offices love having the map pages so I don't want to remove the pages. So, my question is would these pages with no real content be hurting the rankings of the other pages on our site? Therefore, should I block the pages with my robots.txt? Would I also have to remove these pages (in webmaster tools?) from Google for blocking by robots.txt to really work? I appreciate your feedback, thanks!
Technical SEO | imaginex0
-
Google not using redirect
We have a GEO-IP redirect in place for our domain, so that users are pointed to the subfolder relevant for their region, e.g: Visit example.com from the UK and you will be redirected to example.com/uk This works fine when you manually type the domain into your browser, however if you search for the site and come to example.com, you end up at example.com I didn't think this was too much of an issue but our subfolders /uk and /au are not getting ranked at all in Google, even for branded keywords. I'm wondering if the fact that Google isn't picking up the redirect means that the pages aren't being indexed properly? Conversely our US region (example.com/us) is being ranked well. Has anyone encountered a similar issue?
Technical SEO | ahyde0
-
Robots.txt Question
In the past, I had blocked a section of my site (i.e. domain.com/store/) by placing the following in my robots.txt file: "Disallow: /store/" Now, I would like the store to be indexed and included in the search results. I have removed the "Disallow: /store/" from the robots.txt file, but approximately one week later a Google search for the URL produces the following meta description in the search results: "A description for this result is not available because of this site's robots.txt – learn more" Is there anything else I need to do to speed up the process of getting this section of the site indexed?
Technical SEO | davidangotti0
-
Do I need robots.txt and meta robots?
If I can manage to tell crawlers what I do and don't want them to crawl for my whole site via my robots.txt file, do I still need meta robots instructions?
Technical SEO | Nola5040
-
Why do I see dramatic differences in impressions between Google Webmaster Tools and Google Insights for Search?
Has anyone else noticed discrepancies between these tools? Take keyword A and keyword B. I've literally seen situations where A has 3 or 4 times the traffic as B in Google Webmaster Tools, but half the traffic of B in Google Insights for Search. What might be the reason for this discrepancy?
Technical SEO | ir-seo-account0
-
Are Google now indexing iFrames?
A client is pulling content through an iFrame, and when searching for a snippet of that exact content the page that is pulling the data is being indexed and not the iFrame page. Seen this before?
Technical SEO | White.net0
-
How do I use the Robots.txt "disallow" command properly for folders I don't want indexed?
Today's sitemap webinar made me think about the disallow feature, seems opposite of sitemaps, but it also seems both are kind of ignored in varying ways by the engines. I don't need help semantically, I got that part. I just can't seem to find a contemporary answer about what should be blocked using the robots.txt file. For example, I have folders containing site comps for clients that I really don't want showing up in the SERPS. Is it better to not have these folders on the domain at all? There are also security issues I've heard of that make sense, simply look at a site's robots file to see what they are hiding. It makes it easier to hunt for files when they know the directory the files are contained in. Do I concern myself with this? Another example is a folder I have for my xml sitemap generator. I imagine google isn't going to try to index this or count it as content, so do I need to add folders like this to the disallow list?
Technical SEO | SpringMountain0