Rogerbot getting cheeky?
-
Hi SeoMoz,
From time to time my server crashes during Rogerbot's crawling escapades, even though my robots.txt file sets a crawl-delay of 10, which I've now increased to 20.
I looked at the Apache log and noticed Roger hitting me from four different addresses: 216.244.72.3, 216.244.72.11, 216.244.72.12 and 216.176.191.201. Most of the time, while each individual address waited 10 seconds between its own requests, all four addresses would hit four different pages simultaneously (example 2). At other times, the crawler wasn't respecting robots.txt at all (see example 1 below).
I wouldn't call this situation 'respecting the crawl-delay entry in robots.txt', as other questions answered here by you have claimed. Four simultaneous page requests within one second from Rogerbot is not what should be happening, IMHO.
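For reference, the relevant robots.txt entry looks something like this (a minimal sketch; whether the rule targets Rogerbot specifically or all crawlers via `*` is an assumption, adjust to taste):

```
# robots.txt - ask the crawler to wait N seconds between requests
User-agent: rogerbot
Crawl-delay: 10
```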
example 1
216.244.72.12 - - [05/Sep/2012:15:54:27 +1000] "GET /store/product-info.php?mypage1.html HTTP/1.1" 200 77813
216.244.72.12 - - [05/Sep/2012:15:54:27 +1000] "GET /store/product-info.php?mypage2.html HTTP/1.1" 200 74058
216.244.72.12 - - [05/Sep/2012:15:54:28 +1000] "GET /store/product-info.php?mypage3.html HTTP/1.1" 200 69772
216.244.72.12 - - [05/Sep/2012:15:54:37 +1000] "GET /store/product-info.php?mypage4.html HTTP/1.1" 200 82441
example 2
216.244.72.12 - - [05/Sep/2012:15:46:15 +1000] "GET /store/mypage1.html HTTP/1.1" 200 70209
216.244.72.11 - - [05/Sep/2012:15:46:15 +1000] "GET /store/mypage2.html HTTP/1.1" 200 82384
216.244.72.12 - - [05/Sep/2012:15:46:15 +1000] "GET /store/mypage3.html HTTP/1.1" 200 83683
216.244.72.3 - - [05/Sep/2012:15:46:15 +1000] "GET /store/mypage4.html HTTP/1.1" 200 82431
216.244.72.3 - - [05/Sep/2012:15:46:16 +1000] "GET /store/mypage5.html HTTP/1.1" 200 82855
216.176.191.201 - - [05/Sep/2012:15:46:26 +1000] "GET /store/mypage6.html HTTP/1.1" 200 75659
Please advise.
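The pattern above is easy to verify from the access log itself: group request timestamps by client IP and measure the gaps. A short sketch (the log lines are taken from example 2 above; the field positions assume Apache's common log format):

```python
import re
from collections import defaultdict
from datetime import datetime

LOG_LINES = [
    '216.244.72.12 - - [05/Sep/2012:15:46:15 +1000] "GET /store/mypage1.html HTTP/1.1" 200 70209',
    '216.244.72.11 - - [05/Sep/2012:15:46:15 +1000] "GET /store/mypage2.html HTTP/1.1" 200 82384',
    '216.244.72.12 - - [05/Sep/2012:15:46:15 +1000] "GET /store/mypage3.html HTTP/1.1" 200 83683',
    '216.244.72.3 - - [05/Sep/2012:15:46:15 +1000] "GET /store/mypage4.html HTTP/1.1" 200 82431',
    '216.244.72.3 - - [05/Sep/2012:15:46:16 +1000] "GET /store/mypage5.html HTTP/1.1" 200 82855',
]

# Capture the client IP and the bracketed timestamp from each log line.
LOG_RE = re.compile(r'^(\S+) \S+ \S+ \[([^\]]+)\]')

def requests_by_ip(lines):
    """Group request timestamps by client IP."""
    hits = defaultdict(list)
    for line in lines:
        m = LOG_RE.match(line)
        if not m:
            continue
        ip, ts = m.groups()
        hits[ip].append(datetime.strptime(ts, "%d/%b/%Y:%H:%M:%S %z"))
    return hits

def min_gap_seconds(timestamps):
    """Smallest interval between consecutive requests in a timestamp list."""
    ts = sorted(timestamps)
    return min((b - a).total_seconds() for a, b in zip(ts, ts[1:]))

hits = requests_by_ip(LOG_LINES)
# Treat all four IPs as one crawler and check the combined request rate.
combined = sorted(t for ts in hits.values() for t in ts)
print(min_gap_seconds(combined))  # prints 0.0: multiple requests in the same second
```

Run against the example above, the combined gap is 0 seconds even though any single IP taken alone may look closer to compliant.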
-
Hi BM7,
I'm going to open up a ticket on this to have our engineers take a closer look at your site. Once we have an overall response, I'll post it here for other community members to view.
Cheers!
-
Thanks, Megan, for your reply.
I'll give that a try, and I've blocked two of the addresses so you're reduced to two crawler sessions. These two measures should reduce the load considerably, as long as Rogerbot respects the 7-second delay.
IMHO, ignoring the Crawl-delay set by the webmaster of the site you are crawling, which crawlers are supposed to respect, is wrong. I got a nasty notice from Google WMT for being down 5 hours because of Rogerbot; it happened in the middle of the night, so the server only got restarted in the morning.
Also, my site has around 600 discrete pages, of which you crawl about 500, so even at the original 10-second crawl delay you could do my whole site in under 1.5 hours (500 pages at 10 seconds each is roughly 83 minutes), and that's only required once a week. So to my mind there is no need to overrule my settings in robots.txt 'so he (Roger) can complete the crawl'.
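For anyone else wanting to block specific crawler IPs at the server level, the Apache-side rules look something like this (a minimal sketch in Apache 2.2-era .htaccess syntax; which addresses to block is up to you, the two shown are just the ones from my log above):

```
# .htaccess - deny two of the crawler source IPs (Apache 2.2 syntax)
Order Allow,Deny
Allow from all
Deny from 216.244.72.3
Deny from 216.244.72.11
```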
Regards,
-
Hi there,
This is Megan from the SEOmoz Help Team. I'm so sorry Rogerbot is causing you grief! This might actually be happening because your crawl delay is too long, so Rogerbot just ends up ignoring it so he can complete the crawl. If you set your crawl delay to a maximum of 7 seconds, that should solve your problem. If you're still running into issues, though, please send us a message at help@seomoz.org and we'll check it out ASAP!
Cheers!