612 : Page banned by error response for robots.txt
-
Hi all,
I ran a crawl on my site https://www.drbillsukala.com.au and received the following error: "612 : Page banned by error response for robots.txt."
Before anyone mentions it, yes, I have been through all the other threads, but they did not help me resolve this issue.
I am able to view my robots.txt file in a browser https://www.drbillsukala.com.au/robots.txt.
The permissions are set to 644 on the robots.txt file, so it should be accessible.
My Google Search Console does not show any issues with my robots.txt file
I am running my site through StackPath CDN, but I'm not inclined to think that's the culprit.
One thing I did find odd: even though I entered my website with the https protocol (I double-checked), the Moz spreadsheet listed my site with the http protocol.
I'd welcome any feedback you might have. Thanks in advance for your help.
Kind regards
-
Hey there! Tawny from Moz's Help Team here.
After doing some quick searching, it looks like how you configure the rules for WAFs depends on what service you're using to host those firewalls. You may need to speak to their support team to ask how to configure things to allow our user-agents.
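The exact rule syntax depends on the WAF vendor, but the intent of such a rule, allowing requests whose User-Agent identifies Moz's crawlers, can be sketched in Python. This is an illustrative sketch only, not a real WAF configuration; the matching-by-substring approach is an assumption about how a typical user-agent allowlist behaves:

```python
# Illustrative sketch of a WAF-style user-agent allowlist check.
# Real WAF rules are written in the vendor's own rule language;
# "rogerbot" and "dotbot" are the tokens Moz's crawlers send.

ALLOWED_BOT_TOKENS = ("rogerbot", "dotbot")

def is_allowed_crawler(user_agent: str) -> bool:
    """Return True if the User-Agent header contains an allowed bot token."""
    ua = user_agent.lower()
    return any(token in ua for token in ALLOWED_BOT_TOKENS)

print(is_allowed_crawler("rogerbot/1.2 (+https://moz.com/help)"))  # True
print(is_allowed_crawler("Mozilla/5.0 (generic browser)"))         # False
```

In a real WAF you would express the same condition as an allow rule that fires before any bot-blocking rules.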
Sorry I can't be more help here! If you still have questions we can help with, feel free to reach out to us at help@moz.com and we'll do our best to assist you.
-
Hi, I am having the same issue.
Can you please tell me how you created the rule in your Web Application Firewall to allow the user agents rogerbot and dotbot?
Thanks!!
-
Hi Federico,
Thanks for the prompt. Yes, this solution worked. I'm hopeful that this thread helps others too because when I was troubleshooting the problem, the other threads were not helpful for my particular situation.
Cheers
-
Hi, did the solution work?
-
Hi Federico,
I think I have found the solution for this problem and am hopeful the crawl will be successful this time around. Based on further digging and speaking to the team at StackPath CDN, I have done the following:
- I added the following to my robots.txt file:
User-agent: rogerbot
Disallow:

User-agent: dotbot
Disallow:
- I added a custom robots.txt file in my CDN which includes the above, and then created a rule in my Web Application Firewall that allows the user agents rogerbot and dotbot.
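Rules like these can be sanity-checked locally with Python's standard-library robots.txt parser before waiting on the next crawl. The rules are parsed from a string, so no network request is made; the URL below is just the site path being tested:

```python
# Verify that a robots.txt with empty Disallow rules for rogerbot and
# dotbot actually permits both crawlers. An empty "Disallow:" means
# "nothing is disallowed" for that user agent.
from urllib.robotparser import RobotFileParser

robots_txt = """\
User-agent: rogerbot
Disallow:

User-agent: dotbot
Disallow:
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

for bot in ("rogerbot", "dotbot"):
    print(bot, parser.can_fetch(bot, "https://www.drbillsukala.com.au/"))
```

Note that this only tests the robots.txt logic itself; it won't catch a CDN or WAF that blocks the crawler before robots.txt is even served.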
I'll let you know if the crawl was successful or not.
Kind regards
-
Thanks for your response Federico. I have checked my robots.txt tester in my Google Search Console and it said "allowed."
Oddly, it also happened on another site of mine that I'm also running through StackPath CDN with a web application firewall in place. This makes me wonder if perhaps the CDN/WAF are the culprits (?).
I'll keep poking around to see what I find.
Cheers
-
Seems like an issue with the Moz crawler, as the robots.txt has no issues and the site loads just fine.
If you already tested your robots.txt using the Google Webmaster Tools "robots.txt Tester" just to be sure, then you should contact Moz here: https://moz.com/help/contact/pro
Hope it helps.