Rogerbot getting cheeky?
-
Hi SEOmoz,
From time to time my server crashes during Rogerbot's crawling escapades, even though I have a robots.txt file with a Crawl-delay of 10, which I've now increased to 20.
I looked at the Apache log and noticed Roger hitting me from 4 different addresses: 216.244.72.3, 72.11, 72.12, and 216.176.191.201. Most of the time, while requests from each individual address were 10 seconds apart, all 4 addresses would hit 4 different pages simultaneously (example 2). At other times, robots.txt wasn't being respected at all (see example 1 below).
I wouldn't call this situation 'respecting the crawl-delay' entry in robots.txt, as other questions answered here by you have stated. Four simultaneous page requests within 1 second from Rogerbot is not what should be happening, IMHO.
example 1
216.244.72.12 - - [05/Sep/2012:15:54:27 +1000] "GET /store/product-info.php?mypage1.html HTTP/1.1" 200 77813
216.244.72.12 - - [05/Sep/2012:15:54:27 +1000] "GET /store/product-info.php?mypage2.html HTTP/1.1" 200 74058
216.244.72.12 - - [05/Sep/2012:15:54:28 +1000] "GET /store/product-info.php?mypage3.html HTTP/1.1" 200 69772
216.244.72.12 - - [05/Sep/2012:15:54:37 +1000] "GET /store/product-info.php?mypage4.html HTTP/1.1" 200 82441
example 2
216.244.72.12 - - [05/Sep/2012:15:46:15 +1000] "GET /store/mypage1.html HTTP/1.1" 200 70209
216.244.72.11 - - [05/Sep/2012:15:46:15 +1000] "GET /store/mypage2.html HTTP/1.1" 200 82384
216.244.72.12 - - [05/Sep/2012:15:46:15 +1000] "GET /store/mypage3.html HTTP/1.1" 200 83683
216.244.72.3 - - [05/Sep/2012:15:46:15 +1000] "GET /store/mypage4.html HTTP/1.1" 200 82431
216.244.72.3 - - [05/Sep/2012:15:46:16 +1000] "GET /store/mypage5.html HTTP/1.1" 200 82855
216.176.191.201 - - [05/Sep/2012:15:46:26 +1000] "GET /store/mypage6.html HTTP/1.1" 200 75659
Please advise.
-
Hi BM7,
I'm going to open up a ticket on this to have our engineers take a closer look at your site. Once we have an overall response, I'll post it here for other community members to view.
Cheers!
-
Thanks, Megan, for your reply.
I'll give that a try, and I have blocked 2 of the addresses so you're reduced to 2 crawler sessions. These two measures should reduce the load considerably, as long as Rogerbot respects the 7-second delay.
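For anyone following along, blocking two of the four addresses from the logs above can be done in .htaccess. This is only a sketch, assuming Apache 2.2-style access control (mod_authz_host); the two IPs shown are examples taken from the log excerpts, not necessarily the two that were actually blocked:

```apache
# Deny two of the four Rogerbot IPs seen in the Apache log (Apache 2.2 syntax)
Order allow,deny
Allow from all
Deny from 216.244.72.3
Deny from 216.244.72.11
```

On Apache 2.4 the equivalent would use `Require not ip` inside a `<RequireAll>` block instead.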
IMHO, ignoring the Crawl-delay set by the webmaster of the site you are crawling, which crawlers are supposed to respect, is wrong. I got a nasty notice in Google WMT for being down 5 hours because of Rogerbot; it happened in the middle of the night, so the server only got restarted in the morning.
Also, my site has around 600 discrete pages, of which you crawl about 500, so even at the original 10-second crawl delay you could do my whole site in under 1.5 hours, and that's only required once a week. So in my mind there is no need to overrule my settings in robots.txt 'so he (Roger) can complete the crawl'.
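That back-of-envelope figure checks out; here is a quick sanity check, assuming one request per Crawl-delay interval:

```python
# Time for Rogerbot to crawl ~500 pages at the original 10-second delay
pages = 500          # pages actually crawled per visit
crawl_delay_s = 10   # original Crawl-delay value in robots.txt

total_hours = pages * crawl_delay_s / 3600
print(f"{total_hours:.2f} hours")  # 1.39 hours, comfortably under 1.5
```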
Regards,
-
Hi there,
This is Megan from the SEOmoz Help Team. I'm so sorry Rogerbot is causing you grief! This might actually be happening because your crawl delay is too long, so Rogerbot just ends up ignoring it in order to complete the crawl. If you set your crawl delay to a max of 7, it should solve your problem. If you're still running into issues, though, please send a message to help@seomoz.org and we'll check it out asap!
Cheers!
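For reference, the suggested setting would look something like this in robots.txt (a sketch; the 'rogerbot' user-agent token is per Moz's crawler documentation, so verify against their current docs before relying on it):

```
User-agent: rogerbot
Crawl-delay: 7
```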