Accidentally blocked Googlebot for 14 days
-
Today, after noticing a huge drop in organic traffic to the inner pages of my site, I looked into the code and realized that a bug in the last commit caused the server to show captcha pages to all Googlebot requests since April 24.
My site has more than 4,000,000 pages in the index. Before the last code change, Googlebot was exempt from being shown the captcha, so every inner page was crawled and indexed without a problem.
The bug broke the whitelisting mechanism and treated requests from Google's IP addresses the same as regular users, so the captcha page was crawled whenever Googlebot visited thousands of my site's inner pages. This made Google think all of my inner pages are identical to one another, and starting May 5th Google removed all of the inner pages from the SERPs, even though many of them had good rankings before that.
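For context, the exemption worked along the lines of the sketch below. This is only a minimal Python illustration of Google's documented "verify Googlebot by reverse DNS" approach, not my actual production code, and the function and variable names are placeholders.

```python
import socket

def is_verified_googlebot(ip_address, user_agent):
    """Return True only when the request genuinely comes from Googlebot.

    Follows Google's documented verification steps: reverse-DNS the IP,
    check the hostname ends in googlebot.com or google.com, then
    forward-resolve that hostname and confirm it maps back to the same IP.
    """
    if "Googlebot" not in user_agent:
        return False
    try:
        hostname = socket.gethostbyaddr(ip_address)[0]              # reverse DNS
        if not hostname.endswith((".googlebot.com", ".google.com")):
            return False
        return ip_address in socket.gethostbyname_ex(hostname)[2]   # forward confirm
    except (socket.herror, socket.gaierror):
        return False

def should_show_captcha(request_ip, request_user_agent, looks_suspicious):
    # Verified crawlers are always exempt; other visitors only see the
    # captcha when the normal anti-abuse checks flag them.
    if is_verified_googlebot(request_ip, request_user_agent):
        return False
    return looks_suspicious
```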
I initially thought this was a manual or algorithmic penalty, but:
1. I did not receive a warning message in GWT.
2. The ranking for the main URL is still good. I tried "Fetch as Google" in GWT and realized that everything Googlebot saw on my inner pages over the past 14 days was the same captcha page.
Now, I have fixed the bug and updated the production site. I just wanted to ask:
1. How long will it take for Google to drop the "duplicate content" flag on my inner pages and show them in the SERPs again? In my experience Googlebot revisits URLs quite often, but once a URL is flagged as containing similar content it can be difficult to recover. Is that correct?
2. Besides waiting for Google to update its index, what else can I do right now?
Thanks in advance for your answers.
-
Thanks for the info. My site's current crawl rate is about 350,000 pages per day, so it will take 10-20 days to recrawl the entire site.
Most of the organic traffic goes to about 10,000 URLs; the rest are mostly pagination URLs and the like. Right now the first inner page for each term (the pages that drove that traffic) has disappeared from the results of the inurl: command.
-
One of my competitors made this type of error, and we figured it out right away when their site dropped from the SERPs. It took them a couple of weeks to figure it out and make the change; we were hoping they never would so we could rake in lots of dough. When they fixed it they were back in the SERPs at full strength within a couple of days... but they had 40 indexed pages instead of 4,000,000.
I think you will recover well, but it might take a while if you don't have a lot of deep links.
Good luck.
-
Pretty much all you can do is wait for Google to recrawl your entire site. You can try re-submitting your site in Webmaster Tools (Health -> Fetch as Google). Getting links from other sites will help speed up the crawling, and links from social sites like Twitter and Google+ can help as well.
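If the site has an XML sitemap, re-submitting or pinging it can also prompt Google to pick up the fixed URLs a bit sooner. Here is a minimal sketch, assuming a Python environment and a placeholder sitemap URL (swap in the real one); submitting the sitemap in Webmaster Tools does the same thing through the UI.

```python
import urllib.request
from urllib.parse import quote

# Placeholder sitemap location -- replace with the site's real sitemap URL.
SITEMAP_URL = "http://www.example.com/sitemap.xml"

# Google's sitemap ping endpoint; an HTTP 200 means the ping was accepted
# and the sitemap is queued for processing.
ping_url = "http://www.google.com/ping?sitemap=" + quote(SITEMAP_URL, safe="")

with urllib.request.urlopen(ping_url) as response:
    print("Ping status:", response.getcode())
```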