Rogerbot Ignoring Robots.txt?
-
Hi guys,
We're trying to stop Rogerbot from spending 8,000-9,000 of the 10,000 pages per week in our site crawl on our zillions of PhotoGallery.asp pages. Unfortunately our e-commerce CMS isn't tremendously flexible, so we believe the only way we can block Rogerbot is in our robots.txt file.
Rogerbot keeps crawling all these PhotoGallery.asp pages, which makes our crawl diagnostics all but useless.
I've contacted the SEOMoz support staff and they claim the problem is on our side. This is the robots.txt we are using:
User-agent: rogerbot
Disallow:/PhotoGallery.asp
Disallow:/pindex.asp
Disallow:/help.asp
Disallow:/kb.asp
Disallow:/ReviewNew.asp
User-agent: *
Disallow:/cgi-bin/
Disallow:/myaccount.asp
Disallow:/WishList.asp
Disallow:/CFreeDiamondSearch.asp
Disallow:/DiamondDetails.asp
Disallow:/ShoppingCart.asp
Disallow:/one-page-checkout.asp
Sitemap: http://store.jrdunn.com/sitemap.xml
For some reason the WYSIWYG editor is inserting extra spaces, but those lines are all single-spaced.
Any suggestions? The only other thing I've thought of is to try something like "Disallow:/PhotoGallery.asp*" with a wildcard.
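For anyone who wants to reproduce this, here's a minimal sketch (Python stdlib only, abridged rules from the file above) that checks how a spec-following parser treats those exact lines:

```python
# Sanity-check the posted robots.txt rules with Python's stdlib parser.
# Rules are abridged from the file quoted above; URLs are our own pages.
from urllib.robotparser import RobotFileParser

ROBOTS_TXT = """\
User-agent: rogerbot
Disallow:/PhotoGallery.asp
Disallow:/pindex.asp

User-agent: *
Disallow:/cgi-bin/
Disallow:/ShoppingCart.asp
"""

rp = RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# The rogerbot group blocks the gallery page even without a space after "Disallow:":
print(rp.can_fetch("rogerbot", "http://store.jrdunn.com/PhotoGallery.asp"))   # False

# Caveat: a compliant crawler obeys only the single most specific matching
# group, so rogerbot is NOT bound by the "User-agent: *" rules:
print(rp.can_fetch("rogerbot", "http://store.jrdunn.com/ShoppingCart.asp"))   # True
print(rp.can_fetch("googlebot", "http://store.jrdunn.com/ShoppingCart.asp"))  # False
```

If rogerbot should also stay out of /cgi-bin/ and the cart pages, those Disallow lines need to be repeated inside the rogerbot group, since only one group applies per crawler.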
-
I have just encountered an interesting thing about Moz Link Search and its bot: if you search for domains linking to Google.com, you get a list of about 900,000 domains, among which I was surprised to find webcache.googleusercontent.com.
See the proof in the attached screenshot below.
At the same time, the robots policy of webcache.googleusercontent.com is as shown in the second attachment.
In my opinion, there is only one possible explanation: the Moz bot ignores robots.txt files...
-
Thanks Cyrus,
No, for some reason the editor double-spaced the file when I pasted. Other than that, it's the same though.
Yes, I actually tried ordering the exclusions both ways. Neither works.
The robots.txt checkers report no errors. I had actually checked them before posting.
Before I posted this, I was pretty convinced the problem wasn't in our robots.txt but the Seomoz support staff says essentially, "We don't think the problem is with Rogerbot, so it must be in your robots.txt file, but we can't look at that, so if by some chance your robots.txt file is fine, then there's nothing we can do for you because we're just going to assume the problem is on your side."
I figured that, after everything I've already tried, if the fabulous SEOMoz community can't come up with a solution, that'll be the best I can do.
-
Hi Kelly,
Thanks for letting us know. It could be a couple of things right off the bat. Is this your exact robots.txt file? If so, it's missing some formatting, like proper spacing after the colons, to be perfectly compliant. You can run a check of your robots.txt file at several places:
http://tool.motoricerca.info/robots-checker.phtml
http://www.searchenginepromotionhelp.com/m/robots-text-tester/robots-checker.php
http://www.sxw.org.uk/computing/robots/check.html
Also, it's generally a good idea to put specific directives toward the bottom, so I might flip the order: put the User-agent: * block first and the rogerbot directives last.
Hope this helps. Let us know if any of this points in the right direction.
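For what it's worth, a parser that follows the original spec selects the matching group by user agent rather than by position in the file, so the ordering can be tested empirically. A quick sketch with Python's stdlib parser (rules abridged from the file above):

```python
# Check whether group order in robots.txt changes the outcome for rogerbot.
from urllib.robotparser import RobotFileParser

def is_blocked(robots_txt: str, agent: str, url: str) -> bool:
    """Parse the given robots.txt text and report whether `url` is disallowed."""
    rp = RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return not rp.can_fetch(agent, url)

SPECIFIC_FIRST = "User-agent: rogerbot\nDisallow: /PhotoGallery.asp\n\nUser-agent: *\nDisallow: /cgi-bin/\n"
SPECIFIC_LAST = "User-agent: *\nDisallow: /cgi-bin/\n\nUser-agent: rogerbot\nDisallow: /PhotoGallery.asp\n"

# In this implementation, both orderings block the same URL for rogerbot:
for txt in (SPECIFIC_FIRST, SPECIFIC_LAST):
    print(is_blocked(txt, "rogerbot", "http://store.jrdunn.com/PhotoGallery.asp"))  # True
```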
-
Thanks so much for the tip. Unfortunately still unsuccessful. (shrug)
-
Try
Disallow: /PhotoGallery.asp
(note the space after the colon). I usually put wildcards all over just to be sure, and I've had no issues so far.
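On the wildcard point: under Google/Bing-style matching, Disallow rules are prefix matches already, so a trailing * adds nothing. Here's a rough approximation of that matching logic (my own sketch, not any crawler's actual code):

```python
# Approximate Google-style robots.txt rule matching: '*' matches any run of
# characters, '$' anchors the end, everything else is a prefix match.
import re

def google_style_blocked(pattern: str, path: str) -> bool:
    regex = re.escape(pattern).replace(r"\*", ".*")
    if regex.endswith(r"\$"):
        regex = regex[:-2] + "$"  # restore the end-anchor the escape neutralized
    return re.match(regex, path) is not None

# The explicit trailing wildcard adds nothing over the plain prefix rule:
print(google_style_blocked("/PhotoGallery.asp", "/PhotoGallery.asp?id=7"))   # True
print(google_style_blocked("/PhotoGallery.asp*", "/PhotoGallery.asp?id=7"))  # True
# Only the '$' anchor actually changes behavior:
print(google_style_blocked("/PhotoGallery.asp$", "/PhotoGallery.asp?id=7"))  # False
```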
Related Questions
-
Unsolved: Rogerbot blocked by Cloudflare and not displaying full user-agent string
Hi, We're trying to get Moz to crawl our site, but when we use Create Your Campaign we get the error: "Oops. Our crawlers are unable to access that URL - please check to make sure it is correct. If the issue persists, check out this article for further help." Our robots.txt is fine, and we can actually see Cloudflare is blocking the crawler with Bot Fight Mode. We've added rules to allow rogerbot, but these seem to be getting ignored. If we use a robots.txt test tool (https://technicalseo.com/tools/robots-txt/) with rogerbot as the user agent, it gets through fine and we can see our rule has allowed it. Looking at the Cloudflare activity log (attached), Create Your Campaign appears to crawl the site with the user agent set to simply "rogerbot 1.2", while the robots.txt testing tool uses the full user-agent string "rogerbot/1.0 (http://moz.com/help/pro/what-is-rogerbot-, rogerbot-crawler+shiny@moz.com)", albeit version 1.0. So it seems Cloudflare doesn't like the simple user agent. Is it correct that when Moz crawls the site it now uses the simple string "rogerbot 1.2"? Thanks
Ben
[Attachment: Cloudflare activity log, showing differences in user-agent strings]
Moz Pro | BB_NPG
-
Allow only Rogerbot, not Googlebot nor undesired access
I'm in the middle of site development and wanted to start crawling my site with Rogerbot while keeping Googlebot and similar crawlers away. My site is currently protected with a login (basic Joomla offline site, user and password required), so I thought a good solution would be to remove that limitation and use .htaccess to password-protect it for all users except Rogerbot. Reading here and there, it seems that practice is not recommended, as it could lead to security holes - any other user could see the allowed agents and emulate them. OK, maybe you'd need to be a hacker/cracker - or an experienced developer - to get that info, but I was not able to find clear information on how to proceed securely. The other option was to keep using Joomla's access limitation for everyone except Rogerbot; I'm still not sure how feasible that would be. Mostly, my question is: how do you work on your site before you want it indexed by Google or similar, whether or not you use a CMS? Is there some other way to do this? I would love to have my site ready and crawled before launching it, to avoid fixing issues afterwards... Thanks in advance.
Moz Pro | MilosMilcom
-
Moz campaign works around my robots.txt settings
My robots.txt file looks like this: User-agent: * Disallow: /*? Disallow: /search So it should block (deindex) all dynamic URLs. If I check this URL in Google: site:http://www.webdesign.org/search/page-1.html?author=47 Google tells me: "A description for this result is not available because of this site's robots.txt - learn more." So far so good. Now, I ran a Moz SEO campaign and got a bunch of duplicate page content errors. One of the links is this one: http://www.webdesign.org/search/page-1.html?author=47 (the same one I tested in Google, which told me the page is blocked by robots.txt, which is what I want). So it makes me think that Moz campaigns check files regardless of what robots.txt says? It's my understanding that User-agent: * should forbid Rogerbot from crawling as well. Am I missing something?
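One possible explanation (an assumption on my part, not confirmed by Moz): "/*?" relies on the wildcard extension that Google supports, and a parser implementing only the original robots.txt spec compares rules as literal prefixes. Python's stdlib parser behaves that way, for example:

```python
# Show how a literal-prefix parser treats the "/*?" wildcard rule.
from urllib.robotparser import RobotFileParser

url = "http://www.webdesign.org/search/page-1.html?author=47"

rp = RobotFileParser()
rp.parse("User-agent: *\nDisallow: /*?\nDisallow: /search".splitlines())
# Blocked here - but only because of the plain "/search" prefix rule:
print(rp.can_fetch("anybot", url))  # False

rp2 = RobotFileParser()
rp2.parse("User-agent: *\nDisallow: /*?".splitlines())
# With the wildcard rule alone, a literal-prefix parser allows the URL:
print(rp2.can_fetch("anybot", url))  # True
```

So a crawler without wildcard support could still fetch dynamic URLs that fall outside /search, which would explain the duplicate-content reports.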
Moz Pro | VinceWicks
-
Rogerbot's crawl behaviour vs Google spiders and other crawlers - disparate results have me confused.
I'm curious as to how accurately Rogerbot replicates Google's searchbot. I currently have a site which is reporting over 200 pages of duplicate titles/content in Moz tools. The pages in question all carry session IDs and were blocked in robots.txt (about 3 weeks ago), yet the errors are still appearing. I've also crawled the site using the Screaming Frog SEO Spider. According to Screaming Frog, the offending pages are blocked and are not being crawled. Webmaster Tools is also reporting no crawl errors. Is there something I'm missing here? Why would I receive such different results? Which ones should I trust? Does Rogerbot ignore robots.txt? Any suggestions would be appreciated.
Moz Pro | KJDMedia
-
Do the SEOmoz Campaign Reports follow Robots.txt?
Hello, Do the SEOmoz Campaign Reports (that track errors and warnings for a website) follow rules I write in the robots.txt file? I've done all that I can to fix the legitimate errors with my website, as reported by the fabulous SEOmoz tools. I want to clean up my pages indexed with the search engines so I've written a few rules to exclude content from Wordpress tag URLs for instance. Will my campaign report errors and warnings also drop as a result of this?
Moz Pro | Flexcin
-
Does SEOmoz recognize duplicated URLs blocked by robots.txt?
Hi there: Just a newbie question... I found some duplicated URLs in the "SEOmoz Crawl Diagnostics reports" that should not be there. They are meant to be blocked by the site's robots.txt file. Here is an example URL (Joomla + VirtueMart structure): http://www.domain.com/component/users/?view=registration and here is the blocking content in the robots.txt file: User-agent: * Disallow: /components/ Question is: will this kind of duplicated-URL error be removed from the error list automatically in the future? Should I keep track of which errors shouldn't really be in the error list? What is the best way to handle this kind of error? Thanks and best regards, Franky
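A side observation (just my reading of the example, not an official answer): the rule disallows /components/ (plural, Joomla's internal directory) while the example URL lives under /component/ (singular, the SEF route), so even a spec-compliant parser would allow it. A quick check with Python's stdlib parser:

```python
# Demonstrate the /components/ (rule) vs /component/ (URL) prefix mismatch.
from urllib.robotparser import RobotFileParser

url = "http://www.domain.com/component/users/?view=registration"

rp = RobotFileParser()
rp.parse("User-agent: *\nDisallow: /components/".splitlines())
# "/component/users/" does not start with "/components/", so the URL is allowed:
print(rp.can_fetch("rogerbot", url))  # True

rp2 = RobotFileParser()
rp2.parse("User-agent: *\nDisallow: /component/".splitlines())
# The singular rule does match the URL prefix:
print(rp2.can_fetch("rogerbot", url))  # False
```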
Moz Pro | Viada
-
How to get Rogerbot whitelisted for application firewalls
We have recently installed an application firewall that is blocking Rogerbot from crawling our site. Our IT department has asked for an IP address or range of IP addresses to add to the list of acceptable crawlers. If Rogerbot has a dynamic IP address, how do we get it added to our whitelist? The product IT is using is F5's Application Security Manager.
Moz Pro | Shawn_Huber
-
What's the name of the SEOmoz and Open Site Explorer robots?
I would like to exclude the SEOmoz and Open Site Explorer bots in robots.txt so they don't index my sites... what are their names?
Moz Pro | cezarylech