Severe rank drop due to overwritten robots.txt
-
Hi,
Last week we made a change to Drupal core as part of an update to our website. We accidentally overwrote our good robots.txt, which blocked hundreds of pages, with the default Drupal robots.txt. We didn't catch the mistake, and several hours later our rankings dropped from mostly first or second place in Google organic results to the middle and bottom of the first page.
Basically, I believe we flooded the index with very low-quality pages all at once, threw up a red flag, and got de-ranked.
We have since fixed the robots.txt and have been re-crawled, but we have not seen our rankings return.
Is that a safe assumption about what happened? I haven't seen any other sites in the retail vertical getting hit by any Panda 2.3-type update yet.
Will we see a return in our results anytime soon?
Thanks,
Justin
-
Your present approach is correct. Ensure all of these pages are tagged noindex for now, remove the block from robots.txt, and let Google and Bing crawl them.
I would suggest waiting until you are confident all the pages have been removed from Google's index, then checking Yahoo and Bing. If you decide that robots.txt is the best option for your company, you can re-add the disallows after confirming your site is no longer affected by these pages.
I would also suggest that, going forward, you ensure any new pages on your site that you do not want indexed always include the appropriate meta tag. If this issue happens again, you will have a layer of protection in place.
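The tag in question is the robots meta tag in each page's head. A quick way to spot-check that a page carries it is to parse the HTML for a `meta name="robots"` tag; the sketch below uses only Python's standard library, with a hard-coded sample page for illustration (the page content is hypothetical):

```python
from html.parser import HTMLParser

class RobotsMetaParser(HTMLParser):
    """Flags whether a page carries a noindex directive in its robots meta tag."""
    def __init__(self):
        super().__init__()
        self.noindex = False

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            a = dict(attrs)
            if a.get("name", "").lower() == "robots":
                if "noindex" in a.get("content", "").lower():
                    self.noindex = True

# Hypothetical page that should be kept out of the index.
page = '<html><head><meta name="robots" content="noindex, follow"></head><body></body></html>'

parser = RobotsMetaParser()
parser.feed(page)
print(parser.noindex)  # True: the page carries the noindex directive
```

Running a check like this across the affected URLs is a cheap way to confirm the tag actually made it into the templates before waiting on a re-crawl.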
-
We're fairly confident at this point that we flooded the index with about 15,000 low-quality URLs all at once. Something similar happened once a few years back, but we didn't flood the index then; those pages were newer and low quality, and could have been seen as spam since they had no real content besides AdSense, so we removed them with a disallow in robots.txt.
We are adding the meta noindex to all of these pages. You're saying we should remove the disallow in robots.txt so Googlebot can crawl these pages and see the meta noindex?
We are a very large site and we're crawled often. We're a PR7 site, and our Moz Domain Authority is 79/100, down from 82.
We're hoping these URLs will be removed quickly. I don't think there is a way to remove 15k links in GWMT without setting off flags, either.
-
There is no easy answer for how long it will take.
If your theory is correct that the ranking drop was caused by these pages being added, then as the pages are removed from Google's index your site should improve. The timeline depends on the size of your site, your site's DA, the PA and links of these particular pages, and so on.
If it were my site, I would mark the calendar for August 1st to review the issue. I would check all the pages that were mistakenly indexed to be certain they were removed, and then check the rankings.
-
Hi Ryan,
Thanks for your response. You are correct: we have found some of the pages that should be blocked still indexed. We are now going to use the "noindex, follow" meta tag on these pages, because we can't afford to have them indexed; they are intended for clients/users only, are very low quality, and have been flagged before.
Now, how long until we see our rankings move back? That's the real big question.
Thanks so much for your help.
Justin
-
That's a great answer, Ryan. Just out of curiosity, would it hurt to look at the cached version of the pages if they're indexed? I'd be curious to know whether the date they were cached is right around when the robots.txt was changed. I know it wouldn't alter his course of action, but it might add further confirmation that this caused the problem.
-
Justin,
Based on the information you provided, it's not possible to determine whether the robots.txt file was part of the issue. You need to investigate further. Using Google, enter a query in an attempt to find some of the previously blocked content. For example, let's assume your site is about SEO, but you shared a blog article reviewing the latest Harry Potter movie. You may have used robots.txt to block that article because it is unrelated to your site's focus. Perform a search for "Harry Potter site:mysite.com", replacing mysite.com with your main web address. If the search returns your article, then you know the content was indexed. Try this approach for several of the previously blocked areas of the website.
If you find this content in the SERPs, then you need to have it removed. The best approach is to add the "noindex, follow" meta tag to all of these pages, then remove the block from your robots.txt file.
The problem is that with the block in place in your robots.txt file, Google cannot see the new meta tag and does not know to remove the content from its index.
One last item to mention: Google does have a URL removal tool, but it would not be appropriate in this instance. That tool is designed to remove a page that causes direct damage by being in the index, such as trade secrets or other confidential information.
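The interaction described above, where a robots.txt block prevents a crawler from ever fetching a page and thus from ever seeing its noindex tag, can be sketched with Python's standard-library robots.txt parser (the robots.txt content, paths, and URLs below are hypothetical):

```python
import urllib.robotparser

# Hypothetical robots.txt with the block still in place.
blocked = """
User-agent: *
Disallow: /members/
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(blocked.splitlines())

# While the path is disallowed, a compliant crawler never fetches the
# page, so it can never see a meta noindex tag on it.
print(rp.can_fetch("Googlebot", "http://www.mysite.com/members/page.html"))  # False

# With the disallow removed, the page is crawlable again, so the
# noindex tag can be read and honored.
rp2 = urllib.robotparser.RobotFileParser()
rp2.parse(["User-agent: *", "Disallow:"])
print(rp2.can_fetch("Googlebot", "http://www.mysite.com/members/page.html"))  # True
```

This is why the order of operations matters: add the meta tags first, then lift the robots.txt block, and only re-add the disallows once the pages have dropped out of the index.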