Accidentally blocked Googlebot for 14 days
-
Today, after I noticed a huge drop in organic traffic to the inner pages of my sites, I looked into the code and realized a bug in the last commit had caused the server to show captcha pages to all Googlebot requests since Apr 24.
My site has more than 4,000,000 pages in the index. Before the last code change, Googlebot was exempt from the captcha, so every inner page was crawled and indexed with no problem.
The bug broke the whitelisting mechanism and treated requests from Google's IP addresses the same as those from regular users. As a result, the captcha page was crawled whenever Googlebot visited thousands of my site's inner pages, which made Google think all my inner pages were identical to each other. Starting May 5th, Google removed all the inner pages from the SERPs; before that, many of them had good rankings.
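For context, the check my whitelist was supposed to perform is the one Google documents: trust a visitor claiming to be Googlebot only if its IP reverse-resolves to a googlebot.com or google.com host and that host resolves back to the same IP. A minimal sketch of that verification (not my actual code; the function name and structure are illustrative):

```python
import socket

def is_verified_googlebot(ip: str) -> bool:
    """Return True only if `ip` reverse-resolves to a Google crawler host
    and that host forward-resolves back to the same IP."""
    try:
        host, _, _ = socket.gethostbyaddr(ip)            # reverse DNS lookup
    except (socket.herror, socket.gaierror):
        return False
    if not host.endswith((".googlebot.com", ".google.com")):
        return False
    try:
        forward_ips = socket.gethostbyname_ex(host)[2]   # confirming forward lookup
    except socket.gaierror:
        return False
    return ip in forward_ips                             # must round-trip exactly
```

Only requests passing this check should skip the captcha; caching the result per IP keeps it from costing two DNS lookups on every hit.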
At first I thought this was a manual or algorithmic penalty, but:
1. I did not receive a warning message in GWT.
2. The ranking for the main URL is still good. I tried "Fetch as Google" in GWT and realized that everything Googlebot saw in the past 14 days was the same captcha page, on every one of my inner pages.
Now, I have fixed the bug and updated the production site. I just wanted to ask:
1. How long will it take for Google to remove the "duplicate content" flag on my inner pages and show them in the SERPs again? In my experience Googlebot revisits URLs quite often, but once a URL is flagged as containing similar content, it can be difficult to recover. Is that correct?
2. Besides waiting for Google to update its index, what else can I do right now?
Thanks in advance for your answers.
-
Thanks for the info. My site's current crawl rate is about 350,000 pages per day, so it will take 10-20 days to crawl the entire site (4,000,000 pages / 350,000 per day is roughly 11-12 days at the current rate).
Most of the organic traffic comes to about 10,000 URLs, while the others are pagination URLs and the like. Right now the first inner page of each term, where that traffic landed, has disappeared from the results of the inurl: command.
-
One of my competitors made this type of error, and we figured it out right away when their site dropped from the SERPs. It took them a couple of weeks to figure it out and make the change; we were hoping they never would, so we could rake in lots of dough. When they fixed it, they were back in the SERPs at full strength within a couple of days... but they had 40 indexed pages instead of 4,000,000.
I think you will recover well, but it might take a while if you don't have a lot of deep links.
Good luck.
-
Pretty much all you can do is wait for Google to recrawl your entire site. You can try re-submitting your site in Webmaster Tools (Health -> Fetch as Google). Getting links from other sites will help speed up the crawling, and links from social sites like Twitter and Google+ can help as well.
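If you have an XML sitemap, re-submitting it can also nudge a recrawl along. A minimal sketch of pinging Google with a sitemap URL (the sitemap address is a placeholder):

```python
from urllib.parse import quote
from urllib.request import urlopen

# Hypothetical sitemap location; replace with your real one.
SITEMAP_URL = "http://www.example.com/sitemap.xml"

# Google's sitemap ping endpoint; an HTTP 200 means the ping was received.
ping = "http://www.google.com/ping?sitemap=" + quote(SITEMAP_URL, safe="")
with urlopen(ping) as resp:
    print(resp.status, resp.reason)
```
-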
Related Questions
-
Shopify robots blocking stylesheets causing inconsistent mobile-friendly test results?
One of our Shopify sites suffered an extreme rankings drop. Recent Google algorithm updates include mobile-first, so I tested the site and our team got different mobile-friendly test results. However, Search Console is also flagging pages as not mobile friendly. So, while we end-users see the site as OK on mobile, this may not be the case for Google? I researched inconsistent mobile test results and found answers saying it may be due to robots.txt blocking stylesheets. Do you recognise any blocked directory that might be affecting Google's rendering? We can't edit the Shopify robots.txt, unfortunately. Our dev said the only thing that stands out to him is Disallow: /design_theme_id and the rest shouldn't be hindering Google bots. Here are some of the files blocked:
Disallow: /admin
Disallow: /cart
Disallow: /orders
Disallow: /checkout
Disallow: /9103034/checkouts
Disallow: /9103034/orders
Disallow: /carts
Disallow: /account
Disallow: /collections/+
Disallow: /collections/%2B
Disallow: /collections/%2b
Disallow: /blogs/+
Disallow: /blogs/%2B
Disallow: /blogs/%2b
Disallow: /design_theme_id
Disallow: /preview_theme_id
Disallow: /preview_script_id
Disallow: /discount/*
Disallow: /gift_cards/*
Disallow: /apple-app-site-association
Technical SEO | nhhernandez
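A quick way to test the stylesheet theory is to run one of the theme's asset URLs through a robots.txt parser the way Googlebot would. A short sketch with Python's standard library (the shop domain and asset path are placeholders; note the stdlib parser does not implement every Google wildcard extension, but plain path prefixes like /design_theme_id match):

```python
from urllib.robotparser import RobotFileParser

# Hypothetical shop domain; point this at the real robots.txt.
rp = RobotFileParser("https://www.example-store.com/robots.txt")
rp.read()

# If this prints False for a stylesheet Google needs in order to render
# the page, the mobile-friendly test can fail even though the page looks
# fine to human visitors.
css_url = "https://www.example-store.com/design_theme_id/theme.css"
print(rp.can_fetch("Googlebot", css_url))
```
-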
Pages Crawled Per Day Has Gone Drastically Down. Is It a Google Issue?
Hello experts, In Search Console, under Crawl Stats, the pages crawled per day figure has been dropping day by day: from about 400,000 pages per day it has come down to about 200,000 over the last 15 days. So where is the issue? Am I going wrong somewhere, or is it an issue on Google's end? Thanks!
Technical SEO | Johny12345
-
I have a client whose shop will be down a few days. What would have the least impact on the organic search program?
I have a client who is moving their warehouse, and their shop will be down for four days. I have been doing some research on the best ways to handle this, and I wanted to get the community's feedback. One thought is to leave the pages live but prevent people from placing orders, but this does not provide the best customer experience. Another thought is to set up temporary redirects for the shop pages, landing customers on a "sorry, we are moving" page. Another thought is to serve 503 HTTP status codes on the shop pages along with a temporary redirect to the landing page. Have any of you experienced this issue? If so, what did you do to minimize the impact on the organic search programs? NOTE: All of their static content will remain intact. Only the shop/store will be down.
Technical SEO | smulto
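For reference, the 503 option described above might look like this minimal sketch (standard library only; the shop path and the four-day retry window are illustrative):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

MOVING_PAGE = b"<html><body>Sorry, we are moving. Back in a few days.</body></html>"

class MaintenanceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path.startswith("/shop"):  # hypothetical shop path
            # A real 503 plus Retry-After tells crawlers the outage is
            # temporary, so pages are revisited later instead of dropped.
            self.send_response(503)
            self.send_header("Retry-After", str(4 * 24 * 3600))  # 4 days, in seconds
            self.send_header("Content-Type", "text/html")
            self.end_headers()
            self.wfile.write(MOVING_PAGE)
        else:
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.end_headers()
            self.wfile.write(b"<html><body>Static content stays up.</body></html>")

HTTPServer(("", 8080), MaintenanceHandler).serve_forever()
```

The key detail is serving an actual 503 status with a Retry-After header rather than a 200 "we are moving" page, which crawlers could index as the page's new content.
-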
Google indexing despite robots.txt block
Hi, this subdomain has about 4,000 URLs indexed in Google, although it's blocked via robots.txt: https://www.google.com/search?safe=off&q=site%3Awww1.swisscom.ch&oq=site%3Awww1.swisscom.ch This has been the case for almost a year now, and it does not look like Google tends to respect the blocking in http://www1.swisscom.ch/robots.txt. Any clues why this is, or what I could do to resolve it? Thanks!
Technical SEO | zeepartner
-
Recovering from Blocked Pages Debacle
Hi, per this thread: http://www.seomoz.org/q/800-000-pages-blocked-by-robots we had a huge number of pages blocked by robots.txt by some dynamic file that must have integrated with our CMS somehow. In just a few weeks, hundreds of thousands of pages were "blocked." This number is now going down, but instead of by the hundreds of thousands, it is going down by the hundreds, and very slowly. So we really need to speed up this process. We have our sitemap and will re-submit it, but I have a few questions related to it: Previously the sitemap had the <lastmod> tag set to the original date of each page, and all of these pages have changed since then. Is there any harm in doing a mass change of the <lastmod> field? It would be an accurate reflection, but I don't want it to be caught by some spam catcher. The easy thing to do would be to set that date to now, but then they would all have the same date. Any other tips on how to get these pages "unblocked" faster? Thanks! Craig
Technical SEO | TheCraig
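For what it's worth, the mass <lastmod> change itself is mechanical; a sketch with the standard library (the file name is a placeholder, and whether uniform dates trip a spam filter is exactly the open question above):

```python
import xml.etree.ElementTree as ET
from datetime import date

NS = "http://www.sitemaps.org/schemas/sitemap/0.9"
ET.register_namespace("", NS)  # keep the output free of ns0: prefixes

tree = ET.parse("sitemap.xml")  # hypothetical input file
for url in tree.getroot().findall(f"{{{NS}}}url"):
    lastmod = url.find(f"{{{NS}}}lastmod")
    if lastmod is not None:
        lastmod.text = date.today().isoformat()  # W3C date format, YYYY-MM-DD
tree.write("sitemap.xml", xml_declaration=True, encoding="UTF-8")
```
-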
What does 'blocked by meta robots' mean? How do I fix this?
When I get my crawl diagnostics, I am getting a "blocked by meta robots" warning, which means that my page is not being indexed in the search engines... obviously this is a major issue for organic traffic! What does it actually mean, and how can I fix it?
Technical SEO | rolls123
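The warning refers to a robots meta tag in the page's <head>, such as <meta name="robots" content="noindex,nofollow">; removing it, or changing its content to allow indexing, is the fix. A quick sketch to spot the tag on a live page (the URL is a placeholder):

```python
from html.parser import HTMLParser
from urllib.request import urlopen

class MetaRobotsFinder(HTMLParser):
    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        # Flags tags like <meta name="robots" content="noindex,nofollow">,
        # which tell search engines not to index the page.
        if tag == "meta" and (a.get("name") or "").lower() == "robots":
            print("meta robots found:", a.get("content"))

with urlopen("http://www.example.com/") as resp:  # hypothetical page
    MetaRobotsFinder().feed(resp.read().decode("utf-8", "replace"))
```
-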
Development site accidentally got indexed and now appears in SERPs. How to fix?
I work at a design firm, and we just redesigned a website for a client. When it came time for the coding, we initially built a development site to work out all the kinks before going live. Then we relaunched the actual site about a week ago. Here's the problem: somehow, the developer who coded the site for us (a freelancer) allowed the development site to be indexed by Google. Now, when you enter the client's name into Google, the development site appears higher in the results pages than the real site! In fact, the real site isn't even in the top 50 search results. The client is understandably angry about this for multiple reasons. We quickly added a robots.txt file to the development site and a 301 redirect to the real site. However, that seemed to have no effect on the problem. Any ideas on how to fix this mess? Thank you in advance!
Technical SEO | matt-14567
-
Can anyone please make my day with this domain
Hi, I have been waiting a long time to buy [removed by admin], as I own the .co.uk name, and it is now up for sale for 12 dollars. But here is the problem: I am in Spain at the moment and not due back for two weeks. I have joined GoDaddy, as they say they have it up for sale at 12 dollars, but after joining the site and trying to buy it I am running into major problems: the screen just comes up blank. It is not taking me directly to the auction or buy-now section of the site, and I do not know what is going on. I have spent nearly four hours trying to sort this out. Would anyone please help me find out what is going on? I really need this domain name for my site so I can start using the .com and replace the .co.uk.
Technical SEO | ClaireH-184886