Error Code 612 with robots.txt 200
-
Hi! I am getting the message "Error Code 612: Error response for robots.txt", so the crawler does not check any page of the site. The status code for robots.txt is 200, and Googlebot does not seem to have any problem crawling the site, so I don't know what the matter is.
The site is http://www.musicopolix.com/
Thanks so much in advance for any help!
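For background on why a 612 stops the whole crawl: a polite crawler fetches robots.txt first and only crawls pages the rules allow, so a failed robots.txt response blocks everything. A minimal sketch using Python's standard `urllib.robotparser` (the rule set below is hypothetical, not musicopolix.com's actual file):

```python
# Sketch: how a crawler like rogerbot consults robots.txt before fetching
# pages. These rules are illustrative only.
from urllib import robotparser

rules = """\
User-agent: rogerbot
Disallow: /private/

User-agent: *
Disallow:
"""

rp = robotparser.RobotFileParser()
rp.parse(rules.splitlines())

# Allowed: /index.html is not under a Disallow rule for rogerbot.
print(rp.can_fetch("rogerbot", "http://www.example.com/index.html"))  # True
# Blocked: /private/ is disallowed for rogerbot.
print(rp.can_fetch("rogerbot", "http://www.example.com/private/x"))   # False
```

If the robots.txt request itself errors out (as with a 612), the crawler never gets this far, which is why the rest of the site goes unchecked.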
-
Perfect! Sounds like "It is also possible you are blocking bots from accessing the page with the host or via .htaccess" was in the right direction!
Cheers,
Jake
-
Hi! Thanks for your answer! We found the problem with the help of the Moz support team. Roger was being blocked by our server (HTTP status code 403), but it is now solved and Roger can crawl our site without any problem.
-
I can confirm there is an issue here. For example, https://webmaster.yandex.com/robots.xml lets you test the robots.txt on your site, and it currently reports that it cannot load the file.
Have you checked the server logs for any errors or timeouts when crawlers try to load the file?
It is also possible you are blocking bots from accessing the page with the host or via .htaccess.
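This kind of blocking is easy to miss because the server can answer 200 to a browser while returning 403 to a crawler, based on the User-Agent header. Here is a self-contained sketch that simulates the behavior with a local test server; the handler logic and the "rogerbot" UA string are illustrative, not the actual server configuration from this thread:

```python
# Simulate a server that serves pages to browsers but blocks a crawler
# by User-Agent -- the situation diagnosed in this thread.
import threading
import urllib.request
import urllib.error
from http.server import BaseHTTPRequestHandler, HTTPServer

class UABlockingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        ua = self.headers.get("User-Agent", "")
        if "rogerbot" in ua.lower():
            self.send_response(403)  # crawler is blocked at the server level
        else:
            self.send_response(200)  # browsers see a healthy page
        self.end_headers()

    def log_message(self, *args):    # silence per-request logging
        pass

def fetch_status(url, user_agent):
    """Return the HTTP status code seen by a client with this User-Agent."""
    req = urllib.request.Request(url, headers={"User-Agent": user_agent})
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.status
    except urllib.error.HTTPError as err:
        return err.code

server = HTTPServer(("127.0.0.1", 0), UABlockingHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_port}/robots.txt"

print(fetch_status(url, "Mozilla/5.0"))   # 200: looks fine in a browser
print(fetch_status(url, "rogerbot/1.2"))  # 403: crawler is blocked
server.shutdown()
```

The same check against a live site (swapping the User-Agent header on a real request) is a quick way to confirm whether a host or .htaccess rule is rejecting a specific bot while everything looks normal in a browser.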