612 : Page banned by error response for robots.txt
-
Hi all,
I ran a crawl on my site https://www.drbillsukala.com.au and received the following error: "612 : Page banned by error response for robots.txt." Before anyone mentions it, yes, I have been through all the other threads, but they did not help me resolve this issue.
I am able to view my robots.txt file in a browser: https://www.drbillsukala.com.au/robots.txt.
The permissions are set to 644 on the robots.txt file, so it should be accessible.
My Google Search Console does not show any issues with my robots.txt file
I am running my site through StackPath CDN, but I'm not inclined to think that's the culprit. One thing I did find odd is that even though I entered my website with the https protocol (I double-checked), the Moz spreadsheet listed my site with the http protocol.
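In case it helps anyone else troubleshooting, here's a rough sketch (Python with the requests library) of how to check what the robots.txt fetch returns over both protocols when the request identifies itself as a crawler. The rogerbot user-agent string below is illustrative, not Moz's official one.

import requests

# Check both protocols, since the crawl report listed the site as http
# even though it was submitted as https.
URLS = [
    "https://www.drbillsukala.com.au/robots.txt",
    "http://www.drbillsukala.com.au/robots.txt",
]
HEADERS = {"User-Agent": "rogerbot/1.0"}  # illustrative user-agent string only

for url in URLS:
    resp = requests.get(url, headers=HEADERS, timeout=10, allow_redirects=True)
    # resp.url shows where any redirect chain ended up; an error status here,
    # combined with a clean 200 in a normal browser, points at the CDN/WAF layer.
    print(f"{url} -> {resp.url} (HTTP {resp.status_code})")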
I'd welcome any feedback you might have. Thanks in advance for your help.
Kind regards
-
Hey there! Tawny from Moz's Help Team here.
After doing some quick searching, it looks like how you configure WAF rules depends on which service you're using to host the firewall. You may need to speak to their support team to ask how to configure things to allow our user-agents.
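Just to illustrate the logic (a conceptual sketch in Python only, not any vendor's actual WAF rule syntax), an allow rule for our crawlers usually boils down to something like this:

# Conceptual sketch only -- real WAF rules are written in the vendor's own rule
# language or dashboard. The idea is simply: don't block or challenge requests
# whose User-Agent identifies a crawler you want to allow.
ALLOWED_CRAWLER_TOKENS = ("rogerbot", "dotbot")

def should_bypass_waf_block(user_agent: str) -> bool:
    """Return True if the request's User-Agent matches an allowed crawler."""
    ua = user_agent.lower()
    return any(token in ua for token in ALLOWED_CRAWLER_TOKENS)

print(should_bypass_waf_block("rogerbot/1.0"))  # True: let the crawl through
print(should_bypass_waf_block("BadBot/2.0"))    # False: normal WAF rules apply

Keep in mind that user agents can be spoofed, so some firewalls pair a rule like this with an IP-based check.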
Sorry I can't be more help here! If you still have questions we can help with, feel free to reach out to us at help@moz.com and we'll do our best to assist you.
-
Hi, I am having the same issue.
Could you please tell me how you created the rule in your Web Application Firewall to allow the user agents rogerbot and dotbot?
Thanks!!
-
Hi Federico,
Thanks for the prompt. Yes, this solution worked. I'm hopeful that this thread helps others too because when I was troubleshooting the problem, the other threads were not helpful for my particular situation.
Cheers
-
Hi, did the solution work?
-
Hi Federico,
I think I have found the solution for this problem and am hopeful the crawl will be successful this time around. Based on further digging and speaking to the team at StackPath CDN, I have done the following:
- I added the following to my robots.txt file:

User-agent: rogerbot
Disallow:

User-agent: dotbot
Disallow:

- I added a custom robots.txt file in my CDN which includes the above, and then created a rule in my Web Application Firewall that allows the user agents rogerbot and dotbot (a quick way to sanity-check the robots.txt rules is sketched below).
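For anyone who wants to double-check the rules themselves, here's a quick sanity check using only Python's standard library (it tests what the published robots.txt allows, not whether the WAF actually lets the crawler through):

from urllib.robotparser import RobotFileParser

rp = RobotFileParser("https://www.drbillsukala.com.au/robots.txt")
rp.read()  # fetches and parses the live robots.txt

for agent in ("rogerbot", "dotbot"):
    allowed = rp.can_fetch(agent, "https://www.drbillsukala.com.au/")
    print(f"{agent}: {'allowed' if allowed else 'blocked'} by robots.txt")

# Note: read() uses Python's default user agent, so if the WAF blocks that
# request, everything will report as blocked -- which is itself a useful signal.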
I'll let you know if the crawl was successful or not.
Kind regards
-
Thanks for your response, Federico. I checked the robots.txt Tester in my Google Search Console and it said "allowed."
Oddly, it also happened on another site of mine that I'm also running through StackPath CDN with a web application firewall in place. This makes me wonder if perhaps the CDN/WAF are the culprits (?).
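One rough way to test that theory (a sketch only; the crawler user-agent strings are illustrative, and the second URL is a placeholder since I haven't named the other site here):

import requests

SITES = [
    "https://www.drbillsukala.com.au/robots.txt",
    "https://example.com/robots.txt",  # placeholder for the other StackPath-fronted site
]
USER_AGENTS = {
    "browser-like": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "rogerbot-like": "rogerbot/1.0",
    "dotbot-like": "dotbot/1.0",
}

for site in SITES:
    for label, ua in USER_AGENTS.items():
        status = requests.get(site, headers={"User-Agent": ua}, timeout=10).status_code
        print(f"{site} [{label}]: HTTP {status}")

# Errors for the crawler-like requests on both sites, with clean 200s for the
# browser-like requests, would point at the shared CDN/WAF rather than either site.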
I'll keep poking around to see what I find.
Cheers
-
Seems like an issue with the Moz crawler, as the robots.txt has no issues and the site loads just fine.
If you already tested your robots.txt using the Google Webmaster Tools "robots.txt Tester" just to be sure, then you should contact Moz here: https://moz.com/help/contact/pro
Hope it helps.
Related Questions
-
Is there a way to download a report showing all meta descriptions for our web pages?
I see how to look at data for web pages with meta descriptions that have been flagged as being less than optimal. Is there a way to do a complete download of the meta descriptions on all our pages? (good meta as well as not so good) Thanks! Lydia
Link Explorer | Lifespan-Moz
-
Why are recently deleted pages still appearing in the latest MOZ crawl?
Newbie, so please forgive!! OK, so I'm doing my 1st site optimization. It is reporting errors from pages that were deleted a couple of days ago. And I JUST signed up today. Where is this info coming from? Thanks, Billy
Link Explorer | NewSEOguy
-
Crawl Errors on a Wordpress Website
I am getting a 902 error, "Network Errors Prevented Crawler from Contacting Server" when requesting a site crawl on my wordpress website, https://www.systemoneservices.com. I think the error may be related to site speed and caching, but request a second opinion and potential solutions. Thanks, Rich
Link Explorer | rweede
-
How many pages does Moz crawl in the free version?
I am using the Moz Pro 30-day trial version. Can anybody tell me how many pages Moz crawls in a day or a week? It has been two days and only 2 pages have been crawled. Thanks
Link Explorer | VarinderS
-
403 errors in Moz but not in Google Search Console
Hello, Moz is showing that one of the sites I manage has about ten 403 errors on main pages, including the home page. But when I go to Google Search Console, I'm not getting any 403 errors. I don't know too much about this site (I handle the SEO for a few sites as a contractor for a digital marketing agency), but I can see that it's a WordPress site (I'm not sure if that's relevant). Can I assume this a Moz issue only? Thanks, Susannah Noel
Link Explorer | SusannahK.Noel
-
How do I fix 885 Duplicate Page Content Errors appearing in my Moz Report due to categories?
Hi there, I want to set up my Moz report to send directly to a client; however, there are currently 885 duplicate page content errors displaying in the report. These are mostly caused by an item being listed in multiple categories, with each category being a new page/URL. I guess my questions are: 1. Does Google see these as duplicate page content, or does it understand the categories are there for navigation purposes? 2. How do I clear these off my Moz report so that the client doesn't panic that there are some major issues on the site? Thanks for your advice.
Link Explorer | skehoe
-
OnPage Grader double counting keywords on responsive site (hidden vs visible)
FYI - it appears that if you have a responsive site that has blocks of text that are duplicated, but Hidden or Visible depending on the screen width, that On-Page Grader will count any keywords in that text twice. I have text shown in one location to Desktop users that needed to be re-located to a different part of the page for Tablet and Phone users to keep the layout nice. And my OP Grader keyword count doesn't match what I saw on the page doing Ctrl-F to find the keywords, unless you count the Hidden text. (not hidden like cloaking or some black hat thing - just not displayed on certain devices) I guess On Page Grader just reads the source code and ignores whether the text is hidden or visible. It would be nice if it read the code as if it was a Desktop device. (suggestion for Moz staff) Does anybody know if Google also ignores device dependent Hidden vs Visible areas???
Link Explorer | GregB123