Error Code 612 with robots.txt 200
-
Hi! I am getting the message "Error Code 612: Error response for robots.txt," so the crawler does not check any page of the site. The status code for robots.txt is 200, and Googlebot does not seem to have any problem crawling the site, so I don't know what the matter is.
The site is http://www.musicopolix.com/
Thanks so much in advance for any help!
-
Perfect! Sounds like "It is also possible that you are blocking bots from accessing the page at the host level or via .htaccess" was in the right direction!
Cheers,
Jake
-
Hi! Thanks for your answer! We found the problem with the help of the Moz support team. Roger was being blocked by our server (HTTP status code 403), but it is now solved and Roger can crawl our site without any problem.
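For anyone who hits the same thing, a minimal sketch of the kind of check that surfaces this sort of block is to fetch robots.txt once with a browser-style User-Agent and once with a rogerbot-style one and compare the status codes. This uses the third-party requests library, and the rogerbot User-Agent string below is only an approximation, not necessarily the exact one Moz sends:

```python
import requests

URL = "http://www.musicopolix.com/robots.txt"
USER_AGENTS = {
    "browser": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    # Approximation of a Moz crawler User-Agent; the real string may differ.
    "rogerbot": "Mozilla/5.0 (compatible; rogerbot)",
}

for name, ua in USER_AGENTS.items():
    resp = requests.get(URL, headers={"User-Agent": ua}, timeout=10)
    # A 200 for the browser but a 403 for the crawler points at host-level
    # or .htaccess user-agent filtering rather than a robots.txt problem.
    print(f"{name}: HTTP {resp.status_code}")
```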
-
I can confirm there is an issue here. For example, https://webmaster.yandex.com/robots.xml lets you test the robots.txt on your site, and it is currently reporting that it cannot load the file.
Have you checked the server logs for any errors or timeouts when crawlers try to load the file?
It is also possible that you are blocking bots from accessing the page at the host level or via .htaccess.
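If you do go through the logs, something like the following sketch can narrow the search. It assumes an Apache/Nginx-style access log at a hypothetical path and simply prints robots.txt requests that did not return 200; adjust the path and format for your server.

```python
# Hypothetical log path; combined/common log format is assumed.
LOG_PATH = "/var/log/apache2/access.log"

with open(LOG_PATH, encoding="utf-8", errors="replace") as log:
    for line in log:
        if "robots.txt" not in line:
            continue
        # In common/combined log format the status code is the first field
        # after the quoted request, e.g. ... "GET /robots.txt HTTP/1.1" 403 153
        try:
            status = line.split('"')[2].split()[0]
        except IndexError:
            continue
        if status != "200":
            print(line.rstrip())
```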
Related Questions
-
Account Error
Hey, I have a Moz account free trial for 30 days. Whenever I try to analyze a website with it, it shows an error. Please solve my problem.
Link Explorer | | cihiloj7770 -
Crawling 4XX errors because of accented URLs
Hello guys, I am experiencing crawl errors in Moz because the URLs read by the spiders contain accents and special characters, which I know isn't best practice, but my client needs to keep them. I know that Moz "uses percent encoding to parse the HTML in the source code, so any line breaks and spaces in your HTML links or sitemap links are converted to %0A and %20, causing a 404 error". Is there any way to avoid these errors appearing in the dashboard? Or am I supposed to simply ignore them?
Link Explorer | | fran8751 -
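As a quick illustration of the percent-encoding described in the question above, here is a small sketch (the URL value is hypothetical) showing how spaces, line breaks, and accented characters end up as %20, %0A, and UTF-8 escapes once a link is encoded:

```python
from urllib.parse import quote

# Hypothetical href value with accents, a space, and a stray line break
# left over from a CMS field.
href = "/categoría/niños felices\n"

print(quote(href))
# -> /categor%C3%ADa/ni%C3%B1os%20felices%0A
# If the raw link is what ends up in the HTML or sitemap, the crawler
# requests the encoded form and the server may answer with a 404.
```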
Crawl Errors on a WordPress Website
I am getting a 902 error, "Network Errors Prevented Crawler from Contacting Server," when requesting a site crawl on my WordPress website, https://www.systemoneservices.com. I think the error may be related to site speed and caching, but I would like a second opinion and potential solutions. Thanks, Rich
Link Explorer | | rweede0 -
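One hedged way to check the site-speed theory in the question above is simply to time a request to the site and see whether it is slow or times out; a rough sketch using the third-party requests library:

```python
import time
import requests

URL = "https://www.systemoneservices.com/"

start = time.monotonic()
try:
    resp = requests.get(URL, timeout=30)
    elapsed = time.monotonic() - start
    # A response taking tens of seconds (or a timeout) would support the idea
    # that network/speed issues are preventing the crawler from connecting.
    print(f"HTTP {resp.status_code} in {elapsed:.1f}s")
except requests.exceptions.RequestException as exc:
    print(f"Request failed after {time.monotonic() - start:.1f}s: {exc}")
```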
OSE error?
Hi, I just started using Moz Pro, but when I try to check OSE, I get this error: "There was an error getting your data." What's wrong?
Link Explorer | | NielsPNO0 -
403 errors in Moz but not in Google Search Console
Hello, Moz is showing that one of the sites I manage has about ten 403 errors on main pages, including the home page. But when I go to Google Search Console, I'm not getting any 403 errors. I don't know too much about this site (I handle the SEO for a few sites as a contractor for a digital marketing agency), but I can see that it's a WordPress site (I'm not sure if that's relevant). Can I assume this is a Moz-only issue? Thanks, Susannah Noel
Link Explorer | | SusannahK.Noel0 -
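Since Search Console only reflects what Googlebot sees, one way to test whether the 403s in the question above are specific to Moz's crawler is to request the affected pages with a rogerbot-style User-Agent and compare against a normal browser. The User-Agent string and page list below are placeholders:

```python
import requests

# Approximation of a Moz crawler User-Agent; the real string may differ.
ROGERBOT_UA = "Mozilla/5.0 (compatible; rogerbot)"
# Hypothetical list; substitute the pages Moz flags with 403s.
PAGES = ["https://www.example.com/", "https://www.example.com/about/"]

for url in PAGES:
    resp = requests.get(url, headers={"User-Agent": ROGERBOT_UA}, timeout=10)
    # 403s here (but not in a normal browser) usually mean the host, a WAF,
    # or an .htaccess rule is filtering on the crawler's User-Agent.
    print(url, resp.status_code)
```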
How do I fix 885 Duplicate Page Content Errors appearing in my Moz Report due to categories?
Hi There, I want to set up my Moz report to send directly to a client; however, there are currently 885 duplicate page content errors displaying on the report. These are mostly caused by an item being listed in multiple categories, where each category is a new page/URL. I guess my questions are: 1. Does Google see these as duplicate page content, or does it understand that the categories are there for navigation purposes? 2. How do I clear these off my Moz report so that the client doesn't panic that there are major issues on the site? Thanks for your advice.
Link Explorer | | skehoe0 -
Remove Meta Description Errors
When Moz shows pages that are 404s, you are able to remove them after the issue is fixed. Is there a way to do this for missing meta descriptions? It shows that numerous pages are missing them, but when I go and look, they have already been corrected, yet they are still showing months later.
Link Explorer | | seomozinator0 -
Moz crawler showing pages blocked by robots.txt
In our robots.txt, I've used /?key and /?p (etc.) to block a large number of pages which Moz was showing as duplicates or as giving 404s. However, the Moz crawler is still flagging them as an issue. I assumed Roger picked up the robots.txt file, or is that not the case?
Link Explorer | | ahyde0
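For what it's worth, a quick way to sanity-check what a plain robots.txt parser makes of rules like those in the question above is Python's standard urllib.robotparser. Note that it only implements simple prefix matching from the original standard, so it may not reproduce every pattern a commercial crawler supports; the site URL below is a placeholder:

```python
import urllib.robotparser

rp = urllib.robotparser.RobotFileParser()
rp.set_url("https://www.example.com/robots.txt")  # placeholder site
rp.read()  # fetches and parses the live robots.txt

# URLs of the kind blocked with the /?key and /?p rules in the question.
for url in ("https://www.example.com/?key=abc", "https://www.example.com/?p=2"):
    print(url, "allowed for rogerbot:", rp.can_fetch("rogerbot", url))
```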