Crawl error robots.txt
-
Hello, when trying to run a site crawl to analyze our page, the following error appears:
**Moz was unable to crawl your site on Nov 15, 2017.** Our crawler was banned by a page on your site, either through your robots.txt, the X-Robots-Tag HTTP header, or the meta robots tag. Update these tags to allow your page and the rest of your site to be crawled. If this error is found on any page on your site, it prevents our crawler (and some search engines) from crawling the rest of your site. Typically errors like this should be investigated and fixed by the site webmaster.
Can you help us?
Thanks!
-
@Linda-Vassily yes
-
The page is https://frizzant.com/ and it doesn't have a noindex tag.
-
Thanks Linda and Tawny! I'll check it.
-
Hey there!
This is a tricky one — the answer to these questions is almost always specific to the site and the Campaign. For this Campaign, it looks like your robots.txt file returned a 403 forbidden response to our crawler: https://www.screencast.com/t/f42TiSKp
Do you use any kind of DDOS protection software? That can give our tools trouble and cause us to be unable to access the robots.txt file for your site.
I'd recommend checking with your web developer to make sure that your robots.txt file is accessible to our user-agent, rogerbot, and returning a 200 OK status for that user-agent. If you're still having trouble, it'll be easier to assist you if you contact us through help@moz.com, where we can take a closer look at your account and Campaign directly.
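For what it's worth, you can reproduce this check yourself before contacting support. Here's a minimal Python sketch (standard library only; the site URL and timeout are just illustrative) that requests /robots.txt while identifying as rogerbot and reports the status code it gets back:

```python
import urllib.error
import urllib.request

def robots_url(base_url: str) -> str:
    """Build the /robots.txt URL for a site."""
    return base_url.rstrip("/") + "/robots.txt"

def robots_status(base_url: str, user_agent: str = "rogerbot") -> int:
    """Return the HTTP status this user-agent gets for the site's robots.txt."""
    req = urllib.request.Request(robots_url(base_url),
                                 headers={"User-Agent": user_agent})
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.status
    except urllib.error.HTTPError as err:
        return err.code  # e.g. 403 if a firewall rejects this user-agent

# Requires network access, so left commented out:
# print(robots_status("https://frizzant.com"))  # anything other than 200 is a problem
```

If this returns 403 for rogerbot but 200 when you swap in a browser-like User-Agent string, that points at security or DDoS-protection software filtering by user-agent.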
-
I just popped that into ScreamingFrog and I don't see a noindex on that page, but I do see it on some other pages. (Though that shouldn't stop other pages from being crawled.)
Maybe it was just a glitch that happened to occur at the time of the crawl. You could try doing another crawl and see if you get the same error.
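If you want to spot-check a single page for a meta robots noindex without a desktop crawler, here's a minimal standard-library sketch (the sample HTML snippets are made up for illustration; a real check would fetch the page source first):

```python
from html.parser import HTMLParser

class RobotsMetaFinder(HTMLParser):
    """Collect the content of any <meta name="robots"> tags in a page."""
    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            d = dict(attrs)
            if d.get("name", "").lower() == "robots":
                self.directives.append(d.get("content", "") or "")

def has_noindex(html: str) -> bool:
    """True if any meta robots tag in the HTML contains a noindex directive."""
    finder = RobotsMetaFinder()
    finder.feed(html)
    return any("noindex" in c.lower() for c in finder.directives)

print(has_noindex('<html><head><meta name="robots" content="noindex,follow"></head></html>'))  # True
print(has_noindex('<html><head><title>ok</title></head></html>'))  # False
```

Keep in mind a noindex can also arrive via the X-Robots-Tag HTTP header, which won't show up in the HTML at all, so check the response headers too.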
-
The page is http://www.yogaenmandiram.com/ and it doesn't have a noindex tag.
-
Hmm. How about on the page itself? Is there a noindex?
-
Yes, our robots.txt is very simple:

```
User-agent: *
Disallow: /wp-admin/
Allow: /wp-admin/admin-ajax.php
```

-
That just says that you are blocking the Moz crawler. Take a look at your robots.txt file and see if you have any exclusions in there that might cause that page not to be crawled. (Try going to yoursite.com/robots.txt or you can learn more about this topic here.)
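As a sanity check, a robots.txt like the one posted above can be parsed locally with Python's `urllib.robotparser` to confirm what rogerbot is and isn't allowed to fetch (the URLs below are just the site from this thread; note Python's parser applies rules in file order, which can differ from Google's longest-match behavior on Allow/Disallow overlaps):

```python
import urllib.robotparser

# The robots.txt posted in the thread, parsed locally instead of fetched.
ROBOTS_TXT = """\
User-agent: *
Disallow: /wp-admin/
Allow: /wp-admin/admin-ajax.php
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# Ordinary pages are crawlable; only /wp-admin/ is blocked.
print(rp.can_fetch("rogerbot", "http://www.yogaenmandiram.com/"))           # True
print(rp.can_fetch("rogerbot", "http://www.yogaenmandiram.com/wp-admin/"))  # False
```

So the file itself isn't the culprit here; a robots.txt this permissive doesn't block rogerbot, which is why the 403 on the robots.txt request itself (rather than its contents) is the more likely cause.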
-
Sorry, the image doesn't appear.
Try now.
-
It looks like the error you are referring to did not come through in your question. Could you try editing it?