Client accidentally blocked entire site with robots.txt for a week
-
Our client was having a design firm do some website development work for them. The work was done on a staging server that was blocked with a robots.txt to prevent duplicate content issues.
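(For anyone unfamiliar, a robots.txt that blocks an entire site from crawling is typically just two lines along these lines — the exact file on the staging server is assumed here:

User-agent: *
Disallow: /
)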
Unfortunately, when the design firm made the changes live, they also moved over the robots.txt file, which blocked the good, live site from search for a full week. We saw the error (!) as soon as the latest crawl report came in.
The error has been corrected, but...
Does anyone have any experience with a snafu like this? Any idea how long it will take for the damage to be reversed and the site to get back in the good graces of the search engines? Are there any steps we should take in the meantime that would help to rectify the situation more quickly?
Thanks for all of your help.
-
Here's a YouMoz post (promoted to the main blog) about what someone else did in this exact situation; it may help:
http://www.seomoz.org/blog/accidental-noindexation-recovery-strategy-amp-results
A couple of preventative steps would have been to make the robots.txt file on the live site read-only so it couldn't have been as easily overwritten, and to use a free service like Pole Position's Code Monitor (https://polepositionweb.com/roi/codemonitor/index.php) to monitor the contents of your robots.txt file once a day and email you if there are changes. I'd also monitor your dev robots.txt, just to make sure the live site robots.txt doesn't get copied over to dev one day and your dev site gets indexed (I've had that happen!).
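If you'd rather not depend on a third-party service, here's a minimal sketch of a DIY monitor in Python that you could run daily from cron or a task scheduler. The URL and cache path are placeholders, and the alert is just a print — hook it up to email or Slack however you like:

import urllib.request
from pathlib import Path

# Placeholders -- point these at your own site and a writable cache location.
ROBOTS_URL = "https://www.example.com/robots.txt"
CACHE_FILE = Path("robots_cache.txt")

def check_robots():
    """Fetch the live robots.txt and flag any difference from the last saved copy."""
    current = urllib.request.urlopen(ROBOTS_URL, timeout=10).read().decode("utf-8")
    previous = CACHE_FILE.read_text() if CACHE_FILE.exists() else None
    if previous is not None and current != previous:
        # Replace this print with an email/Slack alert in a real monitor.
        print("robots.txt changed!\n--- was ---\n" + previous + "\n--- now ---\n" + current)
    CACHE_FILE.write_text(current)

if __name__ == "__main__":
    check_robots()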
-
I can't say anything about robots.txt
.... but one of my competitors tossed up a new design with nofollow, noindex tags on every page and their site immediately tanked out of Google.
... it took them a couple of weeks to figure it out, but once they yanked that line of code they were back at the top of the SERPs within 48 hours.
... this was a relatively strong site, and I would expect that type of site to recover faster than a PR2 site with little connectivity.
-
Hi, have you tried logging in to Google Webmaster Tools and fetching the URL as Googlebot? This helped me recently with a couple of sites that I had blocked with robots.txt. They were up to date in the SERPs within 2 days.
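As a quick sanity check alongside that, Python's standard library can parse the live robots.txt and confirm Googlebot is no longer blocked — the example.com URLs below are placeholders:

from urllib.robotparser import RobotFileParser

# Placeholder URLs -- swap in the real site.
rp = RobotFileParser()
rp.set_url("https://www.example.com/robots.txt")
rp.read()  # fetches and parses the live robots.txt
print(rp.can_fetch("Googlebot", "https://www.example.com/"))  # True = homepage is crawlable again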
Related Questions
-
Our client's Magento 2 site has lots of obsolete categories. Advice on SEO best practice for setting server-level redirects so I can delete them?
Our client's Magento website has been running for at least a decade, so it has a lot of old legacy categories for brands they no longer carry. We're looking to trim down the number of unnecessary URL redirects in Magento, so my question is: is there an SEO-efficient way to set up permanent redirects at the server level (nginx) that Google will crawl, so that at some point we can delete the categories and the Magento URL redirects? And if this is good practice, can you eventually delete the server redirects once Google has marked them as permanent?
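A hedged sketch of what a server-level permanent redirect could look like in nginx, inside the site's server { } block — the category paths below are made up, and each obsolete category needs its own rule (or a map for larger lists):

# Example only -- the real obsolete category paths will differ.
location = /brands/old-brand-name {
    return 301 https://www.example.com/brands/;
}
location = /another-retired-category.html {
    return 301 https://www.example.com/;
}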
Technical SEO | Breemcc
-
Redirecting old HTML site to new WordPress site
Hi, I'm currently updating an old (8-year-old) HTML site to WordPress, and about a month ago I redirected some URLs to the new site (which is in a directory) like this:
Redirect 301 /article1.htm http://mysite.net/wordpress/article1/
Redirect 301 /article2.htm http://mysite.net/wordpress/article2/
Redirect 301 /article3.htm http://mysite.net/wordpress/article3/
Google has indexed these new URLs and they are showing in search results. I'm almost finished with the new version of the site, which is currently in the /wordpress directory. I intend to move all the files from the directory to the root, so when this is done the new URL will be http://mysite.net/article1/ etc. My question is: what do I do about the redirects that are in place? Do I delete them and replace them with something like this?
Redirect 301 /wordpress/article1/ http://mysite.net/article1/
Redirect 301 /wordpress/article2/ http://mysite.net/article2/
Redirect 301 /wordpress/article3/ http://mysite.net/article3/
Appreciate any help with this.
Technical SEO | briandee
-
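A hedged side note on the redirect question above (assuming the same mod_alias setup shown there): one option after the move is to point both the old .htm URLs and the interim /wordpress/ URLs straight at the final root URLs, so crawlers and visitors only ever follow a single 301 rather than a chain, e.g.:

# Repeat for article2 and article3.
Redirect 301 /article1.htm http://mysite.net/article1/
Redirect 301 /wordpress/article1/ http://mysite.net/article1/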
Question about robots.txt
I just started my own e-commerce website, and I host it on the popular e-commerce platform Pinnacle Cart. It has a lot of functions: page sorting, a mobile website, etc. After adjusting the URL parameters in Google Webmaster Tools three weeks ago, I still get the same duplicate meta title and description errors in both the Google crawl and the SEOmoz crawl. I am not sure if I made a mistake in choosing Pinnacle Cart, because it is not that flexible in terms of editing the core website pages. There is no way to adjust the canonical or add robots rules on individual pages, etc.; however, it does have a function to submit a single robots.txt and to edit the .htaccess. The website pages are in PHP. For example, this URL: www.mycompany.com has a duplicate title and description with www.mycompany.com/site-map.html (there is no way of editing the title and description of my sitemap). Another error is that www.mycompany.com has a duplicate title and description with http://www.mycompany.com/brands?url=brands. Is it possible to exclude the URLs with "url=" and my site-map.html in robots.txt? Or are the URL parameter settings in Google enough, and it just takes a lot of time? Can somebody help me with the format of robots.txt, please? Thanks
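If Pinnacle Cart only accepts a single robots.txt, a hedged sketch of the kind of rules being described here might look like this (Googlebot and Bingbot honour the * wildcard, though not every crawler does; the path matches the site-map.html example URL above):

User-agent: *
Disallow: /*?url=
Disallow: /site-map.html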
Technical SEO | paumer80
-
Robots.txt and 301
Hi Mozzers, can you answer something for me please? I have a client who has 301 redirected the homepage '/' to '/home.aspx', so all or most of the link juice is being passed, which is great. They have also marked the '/' as nofollow/noindex in the robots.txt file, so it's not being crawled. My question is: if robots are being denied access to the '/', is it still passing on the authority from the links that point to this page? It is a 301 and not a 302, so it would work under normal circumstances, but as the page is not being crawled, do I need to change the robots.txt to allow crawling of the '/'? Thanks, Bush
Technical SEO | Bush_JSM
-
How to turn WP site into Ecom site?
I have a couple of old WordPress sites that are old affiliate blogs. I currently sell products that are on Amazon through these sites and sell quite a bit of volume. I have found a source and can afford the inventory to replace Amazon with my own product. So the dilemma is how to turn these WordPress sites into ecommerce sites. The thing I am most worried about is that each site gets about 100-200 visitors a day for great buying keywords, and I obviously don't want to lose my rankings. What are the options for turning a WordPress site into a store? I am not interested in plugins or the other solutions that make the store look very cheap and, I would assume, convert horribly. If you have inner pages ranking for keywords, how does that work? Do the post pages become product pages? So to sum up, I guess I am asking: what are the higher-quality options that will also help me keep my rankings? Thanks
Technical SEO | PEnterprises
-
Does 301 redirecting a site multiple times keep the value of the original site?
Hi, All! If I 301 redirect site www.abc.com to www.def.com, it should pass (almost) all linkjuice, rank, trust, etc. What happens if I then redirect site www.def.com to www.ghi.com? Does the value of the original site pass indefinitely as long as you do the redirects correctly? Or does it start to be devalued at some point? If anyone's had experience redirecting a site more than once and they've seen reportable good/bad/neutral results, that would be very helpful. Thanks in advance! -Aviva B
Technical SEO | debi_zyx
-
Robots.txt question
I want to block spiders from a specific part of the website (say the /abc folder). In robots.txt, I have to write:
User-agent: *
Disallow: /abc/
Do I have to include the trailing slash, or will this do?
User-agent: *
Disallow: /abc
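For what it's worth, the two are not equivalent, because robots.txt rules are simple prefix matches:

User-agent: *
Disallow: /abc/
blocks only URLs inside the /abc/ directory (e.g. /abc/page.html), but not /abc itself, while

User-agent: *
Disallow: /abc
blocks /abc and /abc/page.html, and also anything else that merely starts with /abc, such as /abc.html or /abcdef.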
Technical SEO | seoug_2005