Client accidentally blocked entire site with robots.txt for a week
-
Our client was having a design firm do some website development work for them. The work was done on a staging server that was blocked with a robots.txt to prevent duplicate content issues.
Unfortunately, when the design firm made the changes live, they also moved over the robots.txt file, which blocked the good, live site from search for a full week. We saw the error (!) as soon as the latest crawl report came in.
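We don't know the exact contents of the file in this case, but the typical staging robots.txt is the two-line blanket rule below; that file landing on the live root is enough to block the entire site from crawling:

```text
User-agent: *
Disallow: /
```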
The error has been corrected, but...
Does anyone have any experience with a snafu like this? Any idea how long it will take for the damage to be reversed and the site to get back in the good graces of the search engines? Are there any steps we should take in the meantime that would help to rectify the situation more quickly?
Thanks for all of your help.
-
Here's a YouMoz post that was promoted to the main blog about what someone else did in this situation that may help.
http://www.seomoz.org/blog/accidental-noindexation-recovery-strategy-amp-results
A couple of preventative steps would have been to make the robots.txt file on the live site read-only so it couldn't be overwritten so easily, and to use a free service like Pole Position's Code Monitor (https://polepositionweb.com/roi/codemonitor/index.php) to check the contents of your robots.txt file once a day and email you if anything changes. I'd also monitor your dev robots.txt, just to make sure the live site's robots.txt doesn't get copied over to dev one day and your dev site gets indexed (I've had that happen!).
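If you'd rather roll your own check than rely on a third-party service, a minimal sketch of the same idea follows: fetch the live robots.txt, hash it, and compare against the last snapshot. The site URL and the alerting mechanism are placeholders to adapt to your own stack.

```python
# Minimal robots.txt change monitor, in the spirit of the Code Monitor
# service mentioned above. Run it from cron once a day; wire the
# "changed" branch to whatever alerting you use (email, Slack, etc.).
import hashlib
import urllib.request


def fingerprint(text: str) -> str:
    """Return a stable hash of the robots.txt contents."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()


def robots_changed(previous: str, current: str) -> bool:
    """True if the file differs from the last saved snapshot."""
    return fingerprint(previous) != fingerprint(current)


def fetch_robots(site: str) -> str:
    """Download the live robots.txt (site URL is a placeholder)."""
    with urllib.request.urlopen(f"{site}/robots.txt", timeout=10) as resp:
        return resp.read().decode("utf-8", errors="replace")
```

Monitoring both the live and dev copies is just two cron entries with different site URLs.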
-
I can't say anything about robots.txt
.... but one of my competitors tossed up a new design with nofollow, noindex tags on every page and their site immediately tanked out of Google.
... it took them a couple of weeks to figure it out, but once they yanked that line of code they were back at the top of the SERPs within 48 hours.
... this was a relatively strong site, and I'd expect that type of site to recover faster than a PR2 site with few inbound links.
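For anyone wondering what that "line of code" looks like: a blanket meta robots tag in the shared page template is all it takes to deindex every page, e.g.:

```html
<meta name="robots" content="noindex, nofollow">
```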
-
Hi, have you tried logging in to Google Webmaster Tools and fetching the URL as Googlebot? This helped me recently with a couple of sites that I had blocked with robots.txt. They were up to date in the SERPs within two days.
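Alongside the fetch-as-Googlebot check, you can verify the fix locally: the Python standard library ships a robots.txt parser, so a quick sanity check that Googlebot is allowed to fetch your key URLs looks like this (URLs are hypothetical):

```python
# Sanity-check a robots.txt file: is Googlebot allowed to fetch a URL?
# Uses only the standard library's robots.txt parser.
from urllib.robotparser import RobotFileParser


def googlebot_allowed(robots_txt: str, url: str) -> bool:
    """Parse robots.txt contents and test whether Googlebot may fetch url."""
    rp = RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return rp.can_fetch("Googlebot", url)
```

Run it against the live file after every deploy and you'll catch a re-introduced block before the next crawl does.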
Related Questions
-
Unsolved URL dynamic structure issue for a new global site into which I will redirect multiple well-performing sites.
Dear all, We are working on a new platform called https://www.piktalent.com, where basically we aim to redirect many smaller sites we have with quite a lot of SEO traffic related to internships. Our previous sites are ones like www.spain-internship.com, www.europe-internship.com and other similar ones we have (around 9). Our idea is to smoothly redirect many of the sites to this new platform bit by bit; it's a custom-made site in Python and Node, much more scalable, and we plan to develop an app, etc., to become a bigger platform. For the new site, we decided to create 3 areas for the main content: piktalent.com/opportunities (all the vacancies), piktalent.com/internships and piktalent.com/jobs, so we can categorize the different types of pages we have, and under /opportunities we have all the vacancies. The problem comes when the site generates the different static landings and dynamic searches. We have static landing pages like www.piktalent.com/internships/madrid, but the site also dynamically generates www.piktalent.com/opportunities?search=madrid. Also, most of the searches will generate that type of URL, not following the structure of domain name / type of vacancy / city / name of the vacancy. I have been thinking of 2 potential solutions for this: either applying canonicals, or marking the query suffix as non-indexed in Webmaster Tools... but... What do you think is the right approach? I am worried about potential duplicate content and conflicts between the static content and the dynamic one. My CTO insists that the dynamic URLs have to be like that, but... I am not 100% sure. Can someone provide input on this? Is there a way to block the dynamic URLs that get generated? Anyone with a similar experience? Regards,
Technical SEO | | Jose_jimenez0 -
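The first option the poster mentions, a canonical from the dynamic search URL to the matching static landing page, would look something like this in the `<head>` of `/opportunities?search=madrid` (URLs taken from the question):

```html
<link rel="canonical" href="https://www.piktalent.com/internships/madrid">
```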
Robots.txt on http vs. https
We recently changed our domain from http to https. When a user enters any URL on http, there is a global 301 redirect to the same page on https. I cannot find instructions about what to do with robots.txt. Now that https is the canonical version, should I block the http version with robots.txt? Strangely, I cannot find a single resource about this...
Technical SEO | | zeepartner0 -
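The global 301 the poster describes would, in nginx terms, look something like the sketch below (hypothetical domain); note that such a catch-all also redirects /robots.txt itself, so crawlers requesting the http file are sent to the https version:

```nginx
server {
    listen 80;
    server_name example.com;
    # Global 301: every http URL, including /robots.txt,
    # redirects to the same path on https
    return 301 https://example.com$request_uri;
}
```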
Block bad crawlers
Hi! How are you? I've been working on some of my sites and noticed that I'm getting lots of crawls from search engines that I'm not interested in ranking well in. My question is the following: do you have a list of badly behaved search engines that consume lots of bandwidth and don't send much (or much good) traffic? If so, do you know how to block them using robots.txt? Thanks for the help! Best wishes, Ariel
Technical SEO | | arielbortz0 -
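Blocking a specific crawler via robots.txt, as asked above, is done with a per-user-agent group. The bot names below are just illustrative placeholders, and keep in mind that only well-behaved bots obey robots.txt; truly bad crawlers need server-level blocking:

```text
# Block specific crawlers (example bot names, not real ones)
User-agent: SomeAggressiveBot
Disallow: /

User-agent: AnotherBot
Disallow: /

# Everyone else: full access
User-agent: *
Disallow:
```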
Robots.txt
I have a client who, after a designer added a robots.txt file, has experienced continual growth in URLs blocked by robots.txt, but now the blocked URLs (approx. 1,700) have surpassed those indexed (1,000). Surely that would mean all current URLs are blocked (plus some extra mysterious ones). However, pages are still listed in Google and traffic is being generated from organic search, so it doesn't look like this is the case, apart from the rather alarming Webmaster Tools report. Any ideas what's going on here? cheers dan
Technical SEO | | Dan-Lawrence0 -
One site per location or all under an umbrella site?
I am working on a project where we are re-branding lots (100+) of existing local businesses under one national brand. I am wondering what we should do with their existing websites; they are generally fairly poor and will need re-designing to match the new brand, but may have some residual links. Should we 301 redirect each URL to the national site, e.g. nationalsite.com/localbusinessA? If so, what should I look out for? Do I need to specifically redirect any pages that have links pointing at them to the equivalent pages on the new site? Or should I give each business a new standalone website that links back to the national brand site? More than likely this would be hosted on the same server and CMS as the main site; just the URL would remain. Do I need to make sure that any old URLs that had links to them are 301'd to the new pages? Many thanks for your advice.
Technical SEO | | BadgerToo0 -
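The per-location 301 the poster describes (old local domain pointing into a section of the national site) might be sketched in an Apache .htaccess on the old domain like this (all names are hypothetical):

```apache
# .htaccess on localbusinessA's old domain -- hypothetical names
RewriteEngine On
# Send every path on the old domain to the matching path
# under the national site's section for this business
RewriteRule ^(.*)$ https://nationalsite.com/localbusinessA/$1 [R=301,L]
```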
Can search engines penalize my site if I block IPs from some countries?
I have spotted that some countries in South America generate lots of traffic on my site, and I don't want to sell my service there. Can I be penalized for blocking IPs from certain countries? Thanks!
Technical SEO | | Xopie0 -
Recently revamped site structure - now not even ranking for brand name, but lots of content - what happened? (Yup, the site has been crawled a few times since) Any ideas? Did I make a classic mistake? Any advice appreciated :)
I've completely disappeared off Google - what happened? Even my brand name keyword does not bring up my website - I feel lost, confused and baffled on what my next steps should be. ANY advice would be welcome, since there's no going back to the way the site was set up.
Technical SEO | | JeanieWalker0