Googlebot Crawl Rate causing site slowdown
-
I am hearing from my IT department that Googlebot is causing a massive slowdown of, and occasionally crashing, our site. We get 3.5 to 4 million pageviews a month and add 70-100 new articles to the website each day. We provide daily stock research and market analysis, so it's all high-quality, relevant content. Here are the crawl stats from WMT:
I have not worked with many high-volume, high-traffic sites before, but these crawl stats do not seem to be out of line. My team is getting pressure from the sysadmins to slow down the crawl rate, or to block some or all of the site from Googlebot.
Do these crawl stats seem in line with comparable sites? Would slowing down the crawl rate have a big effect on rankings?
Thanks
-
Similar to Michael, my IT team is saying Googlebot is causing performance issues, specifically during peak hours.
It was suggested that we consider using Apache rewrite rules to serve Googlebot a 503 during our peak hours to limit the impact. I found a Stack Overflow thread (link below) in which John Mueller seems to suggest this approach, but has anyone tried it?
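For what it's worth, here is a rough sketch of what such a rule might look like in .htaccess. The hours are placeholders to adjust to your own peak window, and be aware that serving 503s for extended periods can cause Google to drop pages from the index, so this is a stopgap, not a fix:

```apache
# Serve 503 to Googlebot during peak hours (09:00-17:59 server time)
# Ideally also send a Retry-After header (mod_headers) so the crawler
# knows when to come back.
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} Googlebot [NC]
RewriteCond %{TIME_HOUR} >08
RewriteCond %{TIME_HOUR} <18
RewriteRule ^ - [R=503,L]
```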
-
Blocking Googlebot is a quick and easy way to disappear from the index; not an option if you want Google to rank your site.
For smaller sites, or ones built on limited technology, I sometimes recommend using a Crawl-delay directive in robots.txt:
http://support.google.com/webmasters/bin/answer.py?hl=en&answer=48620
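For reference, the directive itself is a one-liner. Note that Googlebot ignores Crawl-delay (for Google, the rate has to be set in Webmaster Tools, per the link above); Bing and Yandex do honor it:

```
User-agent: *
Crawl-delay: 10
```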
But I agree with both Shane and Zachary: this doesn't seem like the long-term answer to your problem. Your crawl stats don't seem out of line for a site of your size, and a better hardware configuration could well help things out.
With 70 new articles each day, I'd want Google crawling my site as much as they pleased.
-
Whatever Google's default is in GWT; it sets it for you.
You can change it, but that is not recommended except for a specific reason (such as Michael Lewis's scenario). Even then, I am not completely sold that Googlebot is what is causing the "dealbreaking" overhead.
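One way to test that skepticism is to measure Googlebot's actual share of requests in the server access log. A minimal sketch in Python; the log lines here are made-up samples, so point it at your real combined-format log instead:

```python
# Estimate Googlebot's share of requests from a combined-format access log.
# The sample lines below are fabricated for illustration.
sample_log = [
    '66.249.66.1 - - [12/Jun/2012:10:01:02 +0000] "GET /a HTTP/1.1" 200 5120 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"',
    '10.0.0.5 - - [12/Jun/2012:10:01:03 +0000] "GET /b HTTP/1.1" 200 2048 "-" "Mozilla/5.0 (Windows NT 6.1)"',
    '66.249.66.1 - - [12/Jun/2012:10:01:04 +0000] "GET /c HTTP/1.1" 200 1024 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"',
]

def googlebot_share(lines):
    """Fraction of requests whose user-agent string mentions Googlebot."""
    hits = sum(1 for line in lines if "Googlebot" in line)
    return hits / len(lines)

print(f"Googlebot share: {googlebot_share(sample_log):.0%}")
```

If Googlebot's share of requests (or bytes) turns out to be small, the overhead is coming from somewhere else.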
-
What is the ideal setting for the crawler? I have been wondering about this for some time.
-
Hi,
Your admins saying that is like someone saying, "We need to shut the site down, we are getting too much traffic!" It's a common sysadmin response (fix it somewhere else).
4 GB a day downloaded is a lot of bot traffic, but you appear to be a "real-time" site that is probably helped by, and maybe even reliant on, your high crawl rate.
I would upgrade hardware, or even look into some kind of off-site cloud redundancy for failover (hybrid).
I highly doubt that 4 GB a day is a "dealbreaker", but of course that is based on just the one image, and your admins probably have resource monitors. Maybe Varnish is an answer for caching static content to help lighten the load? Or a CDN for file hosting to lighten the bandwidth load?
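To put that in perspective, 4 GB per day is a fairly modest average transfer rate; a quick back-of-the-envelope check:

```python
# 4 GB/day of crawl download, expressed as an average sustained rate.
bytes_per_day = 4 * 1024**3
seconds_per_day = 24 * 60 * 60
avg_kb_per_sec = bytes_per_day / seconds_per_day / 1024

print(f"Average crawl bandwidth: {avg_kb_per_sec:.1f} KB/s")  # roughly 48.5 KB/s
```

Of course crawling is bursty, not evenly spread, but even a several-fold peak over that average is small compared to 3.5-4 million pageviews a month.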
Shane
-
We are hosting the site on our own hardware at a big colo. I know that we are upgrading servers, but they will not be online until the end of July.
Thanks!
-
I wouldn't slow the crawl rate. A high crawl rate is good so that Google can keep their index of your website current.
The better solution is to reconsider your hardware and networking setup. Do you know how you are being hosted? From my own experience with a website of that size, a load balancer on two decent dedicated servers should handle the load without problems. Google crawling your pages shouldn't create noticeable overhead on the right setup.
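As a rough illustration of that setup (hostnames are hypothetical; nginx is shown, but HAProxy or a hardware balancer works the same way), a minimal round-robin load-balancer config might look like:

```nginx
# Minimal round-robin load balancing across two app servers
upstream app_servers {
    server app1.example.com;
    server app2.example.com;
}

server {
    listen 80;
    location / {
        proxy_pass http://app_servers;
        proxy_set_header Host $host;
    }
}
```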