612 : Page banned by error response for robots.txt
-
Hi all,
I ran a crawl on my site https://www.drbillsukala.com.au and received the following error: "612 : Page banned by error response for robots.txt." Before anyone mentions it, yes, I have been through all the other threads, but they did not help me resolve this issue.
I am able to view my robots.txt file in a browser https://www.drbillsukala.com.au/robots.txt.
The permissions are set to 644 on the robots.txt file, so it should be accessible.
My Google Search Console does not show any issues with my robots.txt file
I am running my site through StackPath CDN, but I'm not inclined to think that's the culprit.
One thing I did find odd: even though I entered my website with the https protocol (I double checked), the Moz spreadsheet listed my site with the http protocol.
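For what it's worth, one way to see what the crawler might be getting is to request robots.txt with a crawler-style User-Agent and compare it with a browser-style request; if the crawler-style request gets an error response while the browser one gets a 200, that points at the CDN/WAF. A rough Python sketch (the "rogerbot" string below is just a placeholder, not Moz's exact user-agent):

```python
# Rough sketch: fetch robots.txt with different User-Agent headers and compare
# the status codes. The "rogerbot" string is a placeholder, not Moz's exact UA.
import urllib.error
import urllib.request

URL = "https://www.drbillsukala.com.au/robots.txt"
USER_AGENTS = {
    "browser-like": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "crawler-like": "rogerbot",  # placeholder user-agent string
}

for label, ua in USER_AGENTS.items():
    req = urllib.request.Request(URL, headers={"User-Agent": ua})
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            print(f"{label}: HTTP {resp.status}")
    except urllib.error.HTTPError as err:
        # An error here (but not for the browser-like request) points at the CDN/WAF.
        print(f"{label}: HTTP {err.code}")
    except urllib.error.URLError as err:
        print(f"{label}: request failed ({err.reason})")
```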
I'd welcome any feedback you might have. Thanks in advance for your help.
Kind regards
-
Hey there! Tawny from Moz's Help Team here.
After doing some quick searching, it looks like how you configure the rules for WAFs depends on what service you're using to host those firewalls. You may need to speak to their support team to ask how to configure things to allow our user-agents.
Sorry I can't be more help here! If you still have questions we can help with, feel free to reach out to us at help@moz.com and we'll do our best to assist you.
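There isn't a universal snippet for this since every WAF has its own rule interface, but conceptually the rule is just a User-Agent allowlist. Purely as an illustration (this is not StackPath's API or rule syntax), the decision logic looks something like:

```python
# Illustration only: the idea behind a WAF user-agent allowlist rule.
# This is NOT StackPath's API or configuration format; real rules are set in the WAF dashboard.
ALLOWED_CRAWLERS = ("rogerbot", "dotbot")  # Moz's crawlers

def looks_like_unwanted_bot(ua: str) -> bool:
    # Stand-in for whatever bot-blocking heuristics the WAF already applies.
    return any(token in ua for token in ("scrapy", "python-requests", "curl"))

def waf_should_block(user_agent: str) -> bool:
    """Hypothetical decision: allowlisted crawlers bypass the bot-blocking rules."""
    ua = user_agent.lower()
    if any(bot in ua for bot in ALLOWED_CRAWLERS):
        return False
    return looks_like_unwanted_bot(ua)

print(waf_should_block("rogerbot/1.2 (+https://moz.com/help)"))  # False: allowed through
print(waf_should_block("curl/8.4.0"))                            # True: blocked
```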
-
Hi, I am having the same issue.
Can you please tell me how you created the rule in your Web Application Firewall to allow the user agents rogerbot and dotbot?
Thanks!!
-
Hi Federico,
Thanks for the prompt. Yes, this solution worked. I'm hopeful that this thread helps others too because when I was troubleshooting the problem, the other threads were not helpful for my particular situation.
Cheers
-
Hi, did the solution work?
-
Hi Federico,
I think I have found the solution for this problem and am hopeful the crawl will be successful this time around. Based on further digging and speaking to the team at StackPath CDN, I have done the following:
- I added the following to my robots.txt file:

User-agent: rogerbot
Disallow:

User-agent: dotbot
Disallow:

- I added a custom robots.txt file in my CDN which includes the above, and then created a rule in my Web Application Firewall which allows the user agents rogerbot and dotbot.
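To double-check that the robots.txt rules parse the way I expect once the CDN serves the updated file, a small standard-library check like the sketch below should report both crawlers as allowed (it only tests the robots.txt rules, not the WAF):

```python
# Quick sanity check that the updated robots.txt allows Moz's crawlers.
# Note: RobotFileParser fetches the file with Python's default user-agent,
# so this checks the rules themselves, not whether the WAF admits the bots.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.set_url("https://www.drbillsukala.com.au/robots.txt")
rp.read()

for agent in ("rogerbot", "dotbot"):
    ok = rp.can_fetch(agent, "https://www.drbillsukala.com.au/")
    print(f"{agent}: {'allowed' if ok else 'blocked'}")
```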
I'll let you know if the crawl was successful or not.
Kind regards
-
Thanks for your response, Federico. I have checked the robots.txt Tester in Google Search Console and it said "allowed."
Oddly, it also happened on another site of mine that I'm also running through StackPath CDN with a web application firewall in place. This makes me wonder if perhaps the CDN/WAF are the culprits (?).
I'll keep poking around to see what I find.
Cheers
-
Seems like an issue with the Moz crawler, as the robots.txt has no issues and the site loads just fine.
If you already tested your robots.txt using the Google Webmaster Tools "robots.txt Tester" just to be sure, then you should contact Moz here: https://moz.com/help/contact/pro
Hope it helps.