Client accidentally blocked entire site with robots.txt for a week
-
Our client had a design firm doing some website development work for them. The work was done on a staging server that was blocked with a robots.txt file to prevent duplicate content issues.
Unfortunately, when the design firm pushed the changes live, they also moved over the staging robots.txt file, which blocked the live site from search for a full week. We saw the error (!) as soon as the latest crawl report came in.
The error has been corrected, but...
Does anyone have any experience with a snafu like this? Any idea how long it will take for the damage to be reversed and the site to get back in the good graces of the search engines? Are there any steps we should take in the meantime that would help to rectify the situation more quickly?
Thanks for all of your help.
-
Here's a YouMoz post, since promoted to the main blog, about how someone else handled this exact situation; it may help.
http://www.seomoz.org/blog/accidental-noindexation-recovery-strategy-amp-results
Two preventive steps would have helped here: make the robots.txt file on the live site read-only so it can't be overwritten so easily, and use a free service like Pole Position's Code Monitor (https://polepositionweb.com/roi/codemonitor/index.php) to check the contents of your robots.txt file once a day and email you if anything changes. I'd also monitor your dev robots.txt, just to make sure the live site's robots.txt doesn't get copied over to dev one day and your dev site gets indexed (I've had that happen!).
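If you'd rather roll your own daily check than rely on a third-party service, a minimal sketch along these lines would do it; the URL, state file, and email addresses are placeholders, and it assumes a mail relay on localhost:

import hashlib
import smtplib
import urllib.request
from email.message import EmailMessage
from pathlib import Path

# Placeholders: swap in your own site, state file, and addresses.
ROBOTS_URL = "https://www.example.com/robots.txt"
STATE_FILE = Path("robots_hash.txt")

def check_robots():
    body = urllib.request.urlopen(ROBOTS_URL, timeout=30).read()
    digest = hashlib.sha256(body).hexdigest()
    previous = STATE_FILE.read_text().strip() if STATE_FILE.exists() else None
    if previous is not None and digest != previous:
        msg = EmailMessage()
        msg["Subject"] = "robots.txt changed: " + ROBOTS_URL
        msg["From"] = "monitor@example.com"
        msg["To"] = "you@example.com"
        msg.set_content(body.decode("utf-8", errors="replace"))
        with smtplib.SMTP("localhost") as smtp:  # assumes a local mail relay
            smtp.send_message(msg)
    STATE_FILE.write_text(digest)

if __name__ == "__main__":
    check_robots()  # schedule once a day, e.g. from cron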
-
I can't say anything about robots.txt
.... but one of my competitors tossed up a new design with nofollow, noindex tags on every page and their site immediately tanked out of Google.
... it took them a couple of weeks to figure it out, but once they yanked that line of code they were back in the top SERPs within 48 hours.
... this was a relatively strong site, and I would expect that type of site to recover faster than a PR2 site with little connectivity.
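For anyone wanting to catch that class of mistake before rankings tank, a quick sketch like this (the URL list is hypothetical) flags pages whose meta robots tag contains noindex or nofollow; the regex is deliberately naive, since attribute order varies on real pages:

import re
import urllib.request

# Hypothetical list; swap in the key pages of your own site.
PAGES = [
    "https://www.example.com/",
    "https://www.example.com/products/",
]

# Naive match for tags like <meta name="robots" content="noindex, nofollow">;
# it assumes name comes before content, which real markup may not.
META_ROBOTS = re.compile(
    r'<meta[^>]+name=["\']robots["\'][^>]+content=["\']([^"\']*)["\']',
    re.IGNORECASE,
)

for url in PAGES:
    html = urllib.request.urlopen(url, timeout=30).read().decode("utf-8", "replace")
    for content in META_ROBOTS.findall(html):
        if "noindex" in content.lower() or "nofollow" in content.lower():
            print("WARNING:", url, "has meta robots", repr(content))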
-
Hi, have you tried logging in to Google Webmaster Tools and fetching the URL as Googlebot? This helped me recently with a couple of sites that I had blocked with robots.txt; they were up to date in the SERPs within 2 days.
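Before fetching as Googlebot, it's also worth confirming from the outside that the corrected file really does let Googlebot through. Python's standard robotparser gives a rough check (example.com is a placeholder), with the caveat that it uses classic prefix matching and ignores Google's * and $ wildcard extensions:

import urllib.robotparser

SITE = "https://www.example.com"  # placeholder; use your own site

parser = urllib.robotparser.RobotFileParser()
parser.set_url(SITE + "/robots.txt")
parser.read()

# Caveat: robotparser follows the classic spec (prefix matching only),
# so Google-style wildcard rules are not evaluated the way Googlebot does.
for path in ["/", "/products/", "/blog/"]:
    allowed = parser.can_fetch("Googlebot", SITE + path)
    print(("ALLOWED" if allowed else "BLOCKED") + ": " + SITE + path)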
Related Questions
-
Disallow wildcard match in Robots.txt
This is in my robots.txt file; does anyone know what it is supposed to accomplish? It doesn't appear to be blocking URLs with question marks:
Disallow: /?crawler=1
Disallow: /?mobile=1
Thank you
Technical SEO | AmandaBridge
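A note of context on that pattern: in Google's robots.txt syntax, Disallow: /?crawler=1 only matches URLs whose path starts literally with /?crawler=1. Blocking every URL containing a question mark takes a wildcard rule, sketched below (illustrative only, not a recommendation for this site):
User-agent: *
Disallow: /*?
-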
Shopify robots blocking stylesheets causing inconsistent mobile-friendly test results?
One of our Shopify sites suffered an extreme rankings drop. Recent Google algorithm updates include mobile-first, so I tested the site, and our team got different mobile-friendly test results. However, Search Console is also flagging pages as not mobile friendly. So, while we end users see the site as OK on mobile, this may not be the case for Google? I researched inconsistent mobile test results and found answers saying it may be due to robots.txt blocking stylesheets. Do you recognise any blocked directory that might be affecting Google's rendering? We can't edit the Shopify robots.txt, unfortunately. Our dev said the only thing that stands out to him is Disallow: /design_theme_id and the rest shouldn't be hindering Google's bots. Here are some of the blocked paths:
Disallow: /admin
Disallow: /cart
Disallow: /orders
Disallow: /checkout
Disallow: /9103034/checkouts
Disallow: /9103034/orders
Disallow: /carts
Disallow: /account
Disallow: /collections/+
Disallow: /collections/%2B
Disallow: /collections/%2b
Disallow: /blogs/+
Disallow: /blogs/%2B
Disallow: /blogs/%2b
Disallow: /design_theme_id
Disallow: /preview_theme_id
Disallow: /preview_script_id
Disallow: /discount/*
Disallow: /gift_cards/*
Disallow: /apple-app-site-association
Technical SEO | nhhernandez
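For anyone debugging this kind of rendering question, a rough sketch like the following (the page URL is a placeholder) pulls a page's stylesheet and script URLs and tests each against robots.txt; note again that Python's robotparser only does classic prefix matching, so it won't treat Google's wildcard rules the way Googlebot does:

import urllib.request
import urllib.robotparser
from html.parser import HTMLParser
from urllib.parse import urljoin

PAGE_URL = "https://www.example.com/"  # placeholder page to audit

class ResourceCollector(HTMLParser):
    """Collects stylesheet hrefs and script srcs from a page."""
    def __init__(self):
        super().__init__()
        self.resources = []
    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "link" and attrs.get("rel") == "stylesheet" and attrs.get("href"):
            self.resources.append(attrs["href"])
        elif tag == "script" and attrs.get("src"):
            self.resources.append(attrs["src"])

robots = urllib.robotparser.RobotFileParser()
robots.set_url(urljoin(PAGE_URL, "/robots.txt"))
robots.read()

html = urllib.request.urlopen(PAGE_URL, timeout=30).read().decode("utf-8", "replace")
collector = ResourceCollector()
collector.feed(html)

for resource in collector.resources:
    absolute = urljoin(PAGE_URL, resource)
    if not robots.can_fetch("Googlebot", absolute):
        print("BLOCKED for Googlebot:", absolute)
-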
Moving a site from HTML to WordPress: Should I port all old pages and redirect?
Any help would be appreciated. I am porting an old legacy .html site, which has about 500,000 visitors/month and over 10,000 pages, to a new custom WordPress site with a responsive design (long overdue, of course) that has been written and only needs a few finishing touches, and which includes many database features to generate new pages that did not previously exist. My questions are: Should I bother to port over older pages that are "thin" and have no incoming links, such that reworking them would take time away from the need to port quickly? I will be restructuring the legacy URLs to be lean and clean, so 301 redirects will be necessary. I know that there will be link juice loss, but how long does it usually take for the redirects to "take hold"? I will be moving to https at the same time to avoid yet another porting issue. Many thanks for any advice and opinions as I embark on this massive data entry project.
Technical SEO | gheh2013
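On the redirect question, one habit that helps the move "take hold" faster is spot-checking the redirect map right after launch. Here is a minimal sketch (the URL pairs are hypothetical) that confirms each legacy URL returns a 301 to its new https home:

import urllib.error
import urllib.request

# Hypothetical mapping; replace with your real legacy-to-new redirect map.
REDIRECTS = {
    "http://www.example.com/old-page.html": "https://www.example.com/new-page/",
    "http://www.example.com/widgets.html": "https://www.example.com/widgets/",
}

class NoRedirect(urllib.request.HTTPRedirectHandler):
    """Stop urllib from following redirects so we can inspect them."""
    def redirect_request(self, req, fp, code, msg, headers, newurl):
        return None

opener = urllib.request.build_opener(NoRedirect)

for old, expected in REDIRECTS.items():
    try:
        opener.open(old, timeout=30)
        print("NO REDIRECT:", old)  # served a page instead of redirecting
    except urllib.error.HTTPError as err:
        location = err.headers.get("Location")
        status = "OK" if (err.code == 301 and location == expected) else "CHECK"
        print(status + ":", old, "->", str(err.code), str(location))
-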
"Url blocked by robots.txt." on my Video Sitemap
I'm getting a warning about "Url blocked by robots.txt." on my video sitemap, but just for YouTube videos. Has anyone else encountered this issue, and if so, how did you fix it? Thanks, J
Technical SEO | Critical_Mass
-
Adding directories to robots.txt disallow causes pages to have Blocked Resources
In order to eliminate duplicate/missing title tag errors for a directory (and its sub-directories) under www that contains our third-party chat scripts, I added the parent directory to the robots.txt disallow list. We are now receiving a blocked resource error (in Webmaster Tools) on all of the pages that link to a JavaScript file (for live chat) in the parent directory. My host is suggesting that the warning is only a notice and we can leave things as is without worrying about the pages being de-ranked/penalized. I am wondering if this is true, or if we should remove the one directory that contains the JS from the robots file and find another way to resolve the duplicate title tags?
Technical SEO | miamiman100
-
What is the best way to find missing alt tags on my site (site wide - not page by page)?
I am looking to find all the missing alt tags on my site at once. I have a Firefox extension that used to do it page by page, but my site is huge and that will take forever. Thanks!!
Technical SEO | franchisesolutions
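If a crawler tool isn't an option, a small do-it-yourself sketch like this (the start URL and page cap are placeholders) walks same-site links and reports img tags with missing or empty alt attributes; keep in mind an empty alt can be intentional on decorative images:

from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse
import urllib.request

START_URL = "https://www.example.com/"  # placeholder; use your own site
MAX_PAGES = 200  # safety cap for the sketch

class AltAuditor(HTMLParser):
    """Records <img> tags lacking alt text, plus links to crawl next."""
    def __init__(self, base):
        super().__init__()
        self.base = base
        self.missing = []
        self.links = []
    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img" and not attrs.get("alt"):
            self.missing.append(attrs.get("src", "(no src)"))
        elif tag == "a" and attrs.get("href"):
            self.links.append(urljoin(self.base, attrs["href"]))

host = urlparse(START_URL).netloc
queue, seen = deque([START_URL]), {START_URL}

while queue and len(seen) <= MAX_PAGES:
    page = queue.popleft()
    try:
        html = urllib.request.urlopen(page, timeout=30).read().decode("utf-8", "replace")
    except Exception:
        continue
    auditor = AltAuditor(page)
    auditor.feed(html)
    for src in auditor.missing:
        print(page, ": img missing alt ->", src)
    for link in auditor.links:
        if urlparse(link).netloc == host and link not in seen:
            seen.add(link)
            queue.append(link)
-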
Googlebot cannot access your site
"At the end of July I received a message in my Google webmaster tools saying "Googlebot can't access your site" We checked our robots.txt file and removed a line break in it, and then I had Google Fetch the file again. I have not received any more messages since then. When we created the website I wrote all of the content and optimized each page for about 1 local keyword. A few weeks after I checked my keywords and did have a few on the first page of google. Since then almost all of them have completely disappeared. Because we had not link building effort I would not expect to still be on the first page, but I should definitely be seeing them before the 5th or even 10th page of Google. The address is http://www.tile-pompanobeach.com I'm not sure if these horrible results have something to do with the message from Google or something else. The problem is this client now wants to sign a contract with us for SEO and I really have no Idea what happened and if I will be able to figure it out. The main keyword for my home page is tile pompano beach and I aslo was using Pompano Beach Tile store for the About page which was previously on the first page of Google. Does anyone have some input?
Technical SEO | | DTOSI0 -
Allow or Disallow First in Robots.txt
If I want to override a Disallow directive in robots.txt with an Allow directive, does the Allow go before or after the Disallow? Example:
Allow: /models/ford///page*
Disallow: /models////page
Technical SEO | irvingw
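For what it's worth, Google documents that order doesn't matter for Googlebot: the most specific rule (the longest matching path) wins, and Allow can override Disallow either way. So a block like this (made-up paths) keeps the ford pages crawlable:
User-agent: *
Disallow: /models/
Allow: /models/ford/
Here /models/ford/fiesta matches the longer Allow rule and stays crawlable, while /models/chevy/ only matches the Disallow. Older crawlers that follow the original spec take the first matching rule instead, so listing Allow lines first doesn't hurt for compatibility.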