Accidentally blocked Googlebot for 14 days
-
Today, after noticing a huge drop in organic traffic to the inner pages of my sites, I looked into the code and realized that a bug in the last commit caused the server to show captcha pages to all Googlebot requests since Apr 24.
My site has more than 4,000,000 pages in the index. Before the last code change, Googlebot was exempt from being shown the captcha, so every inner page was crawled and indexed with no problems.
The bug broke the whitelisting mechanism and treated requests from Google's IP addresses the same as requests from regular users, so the captcha page was crawled when Googlebot visited thousands of my site's inner pages. This made Google think all my inner pages were identical to one another. Starting May 5th, Google removed all the inner pages from the SERPs, even though many of them previously had good rankings.
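For anyone tempted to whitelist crawlers, the safest check is the one Google itself documents: verify the visitor with a reverse DNS lookup and then a forward lookup, rather than trusting a static IP list or the user-agent string (which can be spoofed). A minimal sketch in Python; the function name is illustrative, not from my codebase:

```python
import socket

def is_verified_googlebot(ip: str) -> bool:
    """Verify a crawler IP the way Google recommends: reverse DNS,
    check the domain, then a forward lookup that must round-trip."""
    try:
        host, _, _ = socket.gethostbyaddr(ip)  # reverse: IP -> hostname
        if not host.endswith((".googlebot.com", ".google.com")):
            return False
        forward_ips = socket.gethostbyname_ex(host)[2]  # forward: hostname -> IPs
        return ip in forward_ips  # hostname must resolve back to the same IP
    except OSError:  # no reverse record, or lookup failed
        return False
```

Requests that fail this check can be shown the captcha; requests that pass it should never be.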
I initially thought this was a manual or algorithmic penalty, but:
1. I did not receive any warning message in GWT.
2. The rankings for the main URL are still good. When I tried "Fetch as Google" in GWT, I realized that everything Googlebot saw in the past 14 days was the same captcha page, for every one of my inner pages.
Now, I have fixed the bug and updated the production site. I just wanted to ask:
1. How long will it take for Google to remove the "duplicate content" flag on my inner pages and show them in the SERPs again? In my experience, Googlebot revisits URLs quite often, but once a URL is flagged as "contains similar content," it can be difficult to recover. Is that correct?
2. Besides waiting for Google to update its index, what else can I do right now?
Thanks in advance for your answers.
-
Thanks for the info. My site's current crawl rate is about 350,000 pages per day, so it will take 10-20 days to crawl the entire site.
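That estimate is just the index size divided by the daily crawl rate, assuming the rate holds steady:

```python
# Rough recrawl estimate: indexed pages divided by daily crawl rate.
indexed_pages = 4_000_000
crawl_rate_per_day = 350_000
days = indexed_pages / crawl_rate_per_day
print(round(days, 1))  # about 11.4 days at a steady rate
```

In practice the crawl rate fluctuates, which is why the realistic range stretches toward 20 days.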
Most of the organic traffic goes to about 10,000 URLs, while the rest are pagination URLs and the like. Right now, the first inner page of each term, which receives all that traffic, has disappeared from the results of the inurl: command.
-
One of my competitors made this type of error, and we figured it out right away when their site dropped from the SERPs. It took them a couple of weeks to figure it out and make the change. We were hoping they would never figure it out so we could rake in lots of dough. When they fixed it, they were back in the SERPs at full strength within a couple of days... but they had 40 indexed pages instead of 4,000,000.
I think you will recover well, but it might take a while if you don't have a lot of deep links.
Good luck.
-
Pretty much all you can do is wait for Google to recrawl your entire site. You can try re-submitting your site in Webmaster Tools (Health -> Fetch As Google). Getting links from other sites will help speed up crawling, and links from social sites like Twitter and Google+ can help as well.
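Along with Fetch As Google, re-submitting your sitemap can nudge a recrawl. A minimal sketch using the sitemap ping endpoint Google documented; the sitemap URL is illustrative, and resubmitting inside Webmaster Tools accomplishes the same thing:

```python
import urllib.parse
import urllib.request

GOOGLE_PING = "http://www.google.com/ping?sitemap="  # Google's sitemap ping endpoint

def build_ping_url(sitemap_url: str) -> str:
    """Build the resubmission URL, with the sitemap URL-encoded as a query value."""
    return GOOGLE_PING + urllib.parse.quote(sitemap_url, safe="")

def ping_sitemap(sitemap_url: str) -> int:
    """Send the ping; a 200 status means Google accepted the submission."""
    with urllib.request.urlopen(build_ping_url(sitemap_url)) as resp:
        return resp.status
```

For example, `ping_sitemap("http://www.example.com/sitemap.xml")` after deploying the fix tells Google the URLs are worth fetching again.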