Rogerbot Ignoring Robots.txt?
-
Hi guys,
We're trying to stop Rogerbot from spending 8,000-9,000 of the 10,000 pages in our weekly site crawl on our zillions of PhotoGallery.asp pages. Unfortunately, our e-commerce CMS isn't tremendously flexible, so the only way we believe we can block Rogerbot is in our robots.txt file.
Rogerbot keeps crawling all these PhotoGallery.asp pages, so it's making our crawl diagnostics really useless.
I've contacted the SEOMoz support staff and they claim the problem is on our side. This is the robots.txt we are using:
User-agent: rogerbot
Disallow:/PhotoGallery.asp
Disallow:/pindex.asp
Disallow:/help.asp
Disallow:/kb.asp
Disallow:/ReviewNew.asp
User-agent: *
Disallow:/cgi-bin/
Disallow:/myaccount.asp
Disallow:/WishList.asp
Disallow:/CFreeDiamondSearch.asp
Disallow:/DiamondDetails.asp
Disallow:/ShoppingCart.asp
Disallow:/one-page-checkout.asp
Sitemap: http://store.jrdunn.com/sitemap.xml
For some reason the WYSIWYG editor is inserting extra spaces, but those lines are all single-spaced in the actual file.
Any suggestions? The only other thing I've thought of to try is something like "Disallow: /PhotoGallery.asp*" with a wildcard.
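One way to sanity-check the file locally, without waiting a week for the next crawl, is Python's built-in robots.txt parser. Note it implements only the original standard (plain prefix matching, no wildcard support); the rules below are a trimmed copy of the file above.

```python
from urllib import robotparser

# Parse a local copy of the rules (trimmed from the file above) and ask
# whether "rogerbot" may fetch the gallery pages.
ROBOTS_TXT = """\
User-agent: rogerbot
Disallow: /PhotoGallery.asp
Disallow: /pindex.asp

User-agent: *
Disallow: /cgi-bin/
"""

rp = robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

print(rp.can_fetch("rogerbot", "/PhotoGallery.asp"))       # False: blocked
print(rp.can_fetch("rogerbot", "/PhotoGallery.asp?id=7"))  # False: prefix match covers query strings
print(rp.can_fetch("rogerbot", "/index.asp"))              # True: not disallowed
```

If a check like this reports the paths as blocked but the crawler still fetches them, the file itself is unlikely to be the problem.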
-
I have just encountered an interesting thing about Moz Link Search and its bot: if you do a search for domains linking to Google.com, you find a list of about 900,000 domains, among which I was surprised to find webcache.googleusercontent.com.
See the proof in the attached screenshot below.
At the same time, the webcache.googleusercontent.com policy for robots is as shown in the second attachment.
In my opinion, there is only one possible explanation: Moz Bot does ignore robots.txt files...
-
Thanks Cyrus,
No, for some reason the editor double-spaced the file when I pasted. Other than that, it's the same though.
Yes, I actually tried ordering the exclusions both ways. Neither works.
The robots.txt checkers report no errors. I had actually checked them before posting.
Before I posted this, I was pretty convinced the problem wasn't in our robots.txt but the Seomoz support staff says essentially, "We don't think the problem is with Rogerbot, so it must be in your robots.txt file, but we can't look at that, so if by some chance your robots.txt file is fine, then there's nothing we can do for you because we're just going to assume the problem is on your side."
I figured, with everything I've already tried, if the fabulous SEOmoz community can't come up with a solution, then that'll be the best I can do.
-
Hi Kelly,
Thanks for letting us know. It could be a couple of things right off the bat. Is this your exact robots.txt file? If so, it's missing some formatting, like proper spacing, to be perfectly compliant. You can run a check of your robots.txt file at several places:
http://tool.motoricerca.info/robots-checker.phtml
http://www.searchenginepromotionhelp.com/m/robots-text-tester/robots-checker.php
http://www.sxw.org.uk/computing/robots/check.html
Also, it's generally a good idea to put specific exclusions towards the bottom, so I might flip the order: put the rogerbot directives last and the User-agent: * block first.
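Reordered along those lines, with a space after each colon and a blank line between groups (the * group is trimmed here for brevity), the file would look like the sketch below. One caveat worth knowing: a crawler that honors a specific User-agent group ignores the * group entirely, so any paths that should also be blocked for rogerbot must be repeated in its group.

```text
User-agent: *
Disallow: /cgi-bin/
Disallow: /myaccount.asp
Disallow: /WishList.asp

User-agent: rogerbot
Disallow: /PhotoGallery.asp
Disallow: /pindex.asp
Disallow: /help.asp
Disallow: /kb.asp
Disallow: /ReviewNew.asp

Sitemap: http://store.jrdunn.com/sitemap.xml
```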
Hope this helps. Let us know if any of this points in the right direction.
-
Thanks so much for the tip. Unfortunately still unsuccessful. (shrug)
-
Try
Disallow: /PhotoGallery.asp
I usually put wildcards everywhere just to be sure and have had no issues so far.
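For what it's worth: under the original robots.txt standard a Disallow value is a plain prefix match, so Disallow: /PhotoGallery.asp already covers /PhotoGallery.asp?anything; the trailing * is a nonstandard extension that major crawlers generally tolerate. A minimal illustration of both matching styles (a sketch only, not any crawler's actual code):

```python
import re

def is_disallowed(path: str, rules: list[str]) -> bool:
    """Return True if `path` matches any Disallow rule value."""
    for rule in rules:
        if not rule:
            continue  # an empty Disallow value blocks nothing
        if "*" in rule or rule.endswith("$"):
            # Nonstandard extension: translate `*` to `.*` and honor a `$` end anchor.
            pattern = "".join(".*" if ch == "*" else re.escape(ch) for ch in rule.rstrip("$"))
            if re.match(pattern + ("$" if rule.endswith("$") else ""), path):
                return True
        elif path.startswith(rule):
            # Original standard: plain prefix match from the start of the path.
            return True
    return False

print(is_disallowed("/PhotoGallery.asp?id=7", ["/PhotoGallery.asp"]))  # True: prefix match
print(is_disallowed("/PhotoGallery.asp", ["/PhotoGallery.asp*"]))      # True: wildcard form
print(is_disallowed("/index.asp", ["/PhotoGallery.asp"]))              # False
```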
Related Questions
-
Robots.txt file issues on Shopify server
We have repeated issues with one of our ecommerce sites not being crawled. We receive the following message: "Our crawler was not able to access the robots.txt file on your site. This often occurs because of a server error from the robots.txt. Although this may have been caused by a temporary outage, we recommend making sure your robots.txt file is accessible and that your network and server are working correctly. Typically errors like this should be investigated and fixed by the site webmaster. Read our troubleshooting guide." Are you aware of an issue with robots.txt on the Shopify servers? It is happening at least twice a month, so it is quite an issue.
Moz Pro | A_Q
-
When Should I Ignore Moz's Missing Canonical Report?
I'm dealing with an eCommerce website which has a category, subcategory, products. Moz is showing all of these and the individual products as missing a canonical. The site is very thin on content at the moment, but all the pages are clearly different, and I don't see why they need a canonical unless this is some rule that eCommerce sites have to follow. Should I ignore Moz's missing canonical report? My understanding is if the product appears in multiple categories, then a canonical should be put in place to the product. Any advice would be appreciated. Christina
Moz Pro | ChristinaRadisic
-
Rogerbot crawls my site and causes error as it uses urls that don't exist
Whenever Rogerbot comes back to my site for a crawl, it seems to want to crawl URLs that don't exist, and thus causes errors to be reported. Example: the correct URL is
/vw-baywindow/cab_door_slide_door_tailgate_engine_lid_parts/cab_door_seals/genuine_vw_brazil_cab_door_rubber_68-79_10330/
but it seems to want to crawl the following:
/vw-baywindow/cab_door_slide_door_tailgate_engine_lid_parts/cab_door_seals/genuine_vw_brazil_cab_door_rubber_68-79_10330/?id=10330
This format doesn't exist anywhere and never has, so I have no idea where it's getting this URL format from. The user agent details I get are as follows: IP ADDRESS: 107.22.107.114
USER AGENT: rogerbot/1.0 (http://moz.com/help/pro/what-is-rogerbot-, rogerbot-crawler+pr1-crawler-17@moz.com)
Moz Pro | spiralsites
-
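If the crawler keeps requesting the phantom ?id= variant, one defensive option, assuming an Apache host (a sketch only; the parameter name is taken from the example above, not from any known CMS config), is to 301 the query-string form back to the clean URL:

```apache
# .htaccess sketch: redirect requests carrying an id= query string
# back to the same path with the query string stripped (301).
RewriteEngine On
RewriteCond %{QUERY_STRING} ^id=\d+$
RewriteRule ^(.*)$ /$1? [R=301,L]
```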
Will pages marked noindex with a robots meta tag still be crawled and flagged as duplicate page content by SEOmoz?
When we mark a page as noindex with a robots meta tag like <meta name="robots" content="noindex" />, will it still be crawled and marked as duplicate page content? (It is already duplicate content within the site, i.e., two links pointing to the same page, so we marked both links as not to be indexed by search engines.) But after we made this change, the crawl reports show no difference: they still include the noindex-marked pages among the duplicates. Please help solve this problem.
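For the two-URLs-one-page situation described here, a rel=canonical tag on the duplicate URL (rather than, or alongside, noindex) is the usual fix, since it tells crawlers which URL is the preferred one. A sketch, with placeholder URLs:

```html
<!-- Placed in the <head> of the duplicate URL; the href below is a
     hypothetical placeholder for the preferred URL of the same page. -->
<link rel="canonical" href="https://www.example.com/preferred-page" />
```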
Moz Pro | trixmediainc
-
Does Rogerbot recognize rel="alternate" hreflang="x"?
Rogerbot just completed its first crawl and is reporting all kinds of duplicate content - both page content and meta titles/descriptions. The pages it is calling duplicates are used with rel="alternate" hreflang="x", but are still being labeled as dupes. The titles and descriptions are usually exactly the same, so I am working on getting at least those translated into the different languages. I think it's getting tripped up because the product pages it's crawling are only in English, but the chrome of the site is in the translated languages. The URLs look like so: Original: site.com/product Detected duplicates: site.com/fr/product, site.com/de/product, site.com/zh-hans/product
Moz Pro | sedwards
-
Rogerbot getting cheeky?
Hi SEOmoz,
From time to time my server crashes during Rogerbot's crawling escapades, even though I have a robots.txt file with a crawl-delay of 10, now just increased to 20. I looked at the Apache log and noticed Roger hitting me from four different addresses (216.244.72.3, .11, .12, and 216.176.191.201), and most times, while each separate address was 10 seconds apart, all four addresses would hit four different pages simultaneously (example 2). At other times, it wasn't respecting robots.txt at all (see example 1 below). I wouldn't call this situation 'respecting the crawl-delay' entry in robots.txt, as other questions answered here by you have stated. Four simultaneous page requests within one second from Rogerbot is not what should be happening, IMHO.
Example 1:
216.244.72.12 - - [05/Sep/2012:15:54:27 +1000] "GET /store/product-info.php?mypage1.html" 200 77813
216.244.72.12 - - [05/Sep/2012:15:54:27 +1000] "GET /store/product-info.php?mypage2.html HTTP/1.1" 200 74058
216.244.72.12 - - [05/Sep/2012:15:54:28 +1000] "GET /store/product-info.php?mypage3.html HTTP/1.1" 200 69772
216.244.72.12 - - [05/Sep/2012:15:54:37 +1000] "GET /store/product-info.php?mypage4.html HTTP/1.1" 200 82441
Example 2:
216.244.72.12 - - [05/Sep/2012:15:46:15 +1000] "GET /store/mypage1.html HTTP/1.1" 200 70209
216.244.72.11 - - [05/Sep/2012:15:46:15 +1000] "GET /store/mypage2.html HTTP/1.1" 200 82384
216.244.72.12 - - [05/Sep/2012:15:46:15 +1000] "GET /store/mypage3.html HTTP/1.1" 200 83683
216.244.72.3 - - [05/Sep/2012:15:46:15 +1000] "GET /store/mypage4.html HTTP/1.1" 200 82431
216.244.72.3 - - [05/Sep/2012:15:46:16 +1000] "GET /store/mypage5.html HTTP/1.1" 200 82855
216.176.191.201 - - [05/Sep/2012:15:46:26 +1000] "GET /store/mypage6.html HTTP/1.1" 200 75659
Please advise.
Moz Pro | BM7
-
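Claims like this are easy to quantify from the access log. A quick sketch using a few of the sample lines above (the parsing assumes the standard Apache timestamp format; adapt it to your real log): if the bot honored a site-wide crawl-delay of 10, every gap between consecutive requests, across all of its IPs combined, should be at least 10 seconds.

```python
from datetime import datetime

# A few of the sample log lines from the question above.
log_lines = [
    '216.244.72.12 - - [05/Sep/2012:15:46:15 +1000] "GET /store/mypage1.html HTTP/1.1" 200 70209',
    '216.244.72.11 - - [05/Sep/2012:15:46:15 +1000] "GET /store/mypage2.html HTTP/1.1" 200 82384',
    '216.244.72.3 - - [05/Sep/2012:15:46:16 +1000] "GET /store/mypage5.html HTTP/1.1" 200 82855',
    '216.176.191.201 - - [05/Sep/2012:15:46:26 +1000] "GET /store/mypage6.html HTTP/1.1" 200 75659',
]

def timestamp(line: str) -> datetime:
    # Extract e.g. "05/Sep/2012:15:46:15 +1000" from between the brackets.
    stamp = line.split("[", 1)[1].split("]", 1)[0]
    return datetime.strptime(stamp, "%d/%b/%Y:%H:%M:%S %z")

# Gaps (in seconds) between consecutive requests across all IPs combined.
times = sorted(timestamp(line) for line in log_lines)
gaps = [(b - a).total_seconds() for a, b in zip(times, times[1:])]
print(gaps)  # [0.0, 1.0, 10.0] -- two gaps well under the requested delay
```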
How to get rogerbot whitelisted for application firewalls.
We have recently installed an application firewall that is blocking rogerbot from crawling our site. Our IT department has asked for an IP address or range of IP addresses to add to the acceptable crawlers. If rogerbot has a dynamic IP address, how do we get it added to our whitelist? The product IT is using is F5's Application Security Manager.
Moz Pro | Shawn_Huber
-
Does the SEOMoz weekly crawl that highlights no meta description tag, take into account if there is a meta robots noindex,follow tag on the pages it indicates the missing meta descriptions?
The weekly crawl website report is telling me that there are pages with missing meta description tags, yet I've implemented meta robots tags to 'noindex, follow' those pages, which are visible in those pages' source files. As far as Google is concerned, surely this won't be a problem, since Google is being instructed NOT to consider these specific pages for indexing. I am assuming that the weekly SEOmoz website crawl is simply throwing the missing-meta-description findings into its report without itself observing that the particular URLs contain the meta robots 'noindex, follow' tag? Appreciate it if you can clarify whether this is the case. It would help me understand that (at least in terms of my efforts towards Google) your own crawl doesn't observe the meta robots tag instruction, hence the report flagging the discrepancy.
Moz Pro | callassist