605: Page banned by robots.txt
-
Hello everyone,
I need expert help here. Please advise: I am receiving crawl errors for my site saying the page is banned by robots.txt, the X-Robots-Tag header, or the meta robots tag.
My robots.txt file is:
User-agent: *
Disallow:
-
Hey there! I just followed up on the message you sent in to our help team, but I also wanted to post the answer here for reference.
It looks like the robots.txt file may have recently been changed for the site, because when I created a new campaign for the subdomain I did not get that same error. You should no longer see this error on your next campaign update, or you could create a new campaign, where the error should not appear at all.
I did notice that you ran a number of crawl tests on the site since the campaign update, but the important thing to realize is that the crawl test can be cached for up to 48 hours. (I removed the crawls in this version of the screenshot for privacy.) We also cache the crawl tests from campaign crawls, so it looks like the first crawl test you ran on the 29th was cached from your campaign crawl and the two subsequent crawl tests were cached from that first crawl test.
I also wanted to note that there appear to be links to only about two other pages (terms and privacy) on the specific subdomain you are tracking, so we aren't able to crawl beyond those pages. When you limit a campaign to a specific subdomain, we can only access and crawl links within that same subdomain.
-
I am at a loss; I can't find the issue. Let us know what Moz says.
-
I've actually come across a handful of URLs that are NoIndex; I'll DM you a list once it's complete.
I can't be certain this is the root of the problem (I've never seen this error in the crawl report), but based on the error you said you're getting, I believe it's a great starting point.
-
Hi Logan Ray,
Thank you for the detailed guide. All of the other tools' bots are working perfectly; only Moz's is not. My robots meta tag is index, follow, and my robots.txt disallows nothing for all user agents, so I'm still confused about why Moz is showing a crawl error. I have now emailed Moz; let's see what they reply, and I will share it here.
Thank you
-
Hi,
This sounds like it's more related to the meta robots tag, not the robots.txt file.
Try this:
- Run a Screaming Frog crawl on your site
- Once complete, go to the Directives tab
- Look for 'NoIndex' in the 'Meta Robots 1' column (should be the 3rd column)
- If you see any pages marked with that tag, remove it - unless, of course, you need it there for a reason, in which case you should also block that page in your robots.txt file (a quick standalone spot check is sketched below)
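If you'd rather spot-check a single URL without running a full crawl, here's a rough Python sketch (my own suggestion, not part of Screaming Frog; it assumes the requests and beautifulsoup4 packages are installed, and the URL is just a placeholder) that flags a noindex coming from either the X-Robots-Tag header or the meta robots tag:

# Quick spot check for noindex directives on a single URL.
# Assumes requests and beautifulsoup4 are installed; the URL is a placeholder.
import requests
from bs4 import BeautifulSoup

def check_noindex(url):
    resp = requests.get(url, timeout=10)

    # 1. X-Robots-Tag HTTP header, e.g. "noindex, nofollow"
    header = resp.headers.get("X-Robots-Tag", "")
    if "noindex" in header.lower():
        print(f"{url}: noindex via X-Robots-Tag header ({header})")

    # 2. <meta name="robots" content="..."> tags in the HTML
    soup = BeautifulSoup(resp.text, "html.parser")
    for tag in soup.find_all("meta", attrs={"name": "robots"}):
        content = tag.get("content", "")
        if "noindex" in content.lower():
            print(f"{url}: noindex via meta robots tag ({content})")

check_noindex("https://www.example.com/")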
-
Are you able to provide a link to the site? (DM me if you don't want it posted on the forum.)
-
I am receiving the crawl error for Moz only.
There is no error in Google Search Console. I have also tested with Google's robots.txt testing tool: https://www.google.com/webmasters/tools/robots-testing-tool
My robots.txt file is as follows, with no slash:
User-agent: *
Disallow:
-
Hi Bhomes,
Try clearing your robots.txt of any content. A robots.txt with:
User-agent: *
Disallow: /
is blocking everything from crawling your site. See https://support.google.com/webmasters/answer/6062598?hl=en for testing and more details on robots.txt.
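If you want a quick local sanity check of the difference, here's a minimal sketch using Python's built-in urllib.robotparser (the rules and URL are placeholders, and "rogerbot" is just one example user agent):

# Compare "Disallow:" (allows everything) with "Disallow: /" (blocks everything).
from urllib import robotparser

def can_fetch(robots_txt, url, agent="rogerbot"):
    parser = robotparser.RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return parser.can_fetch(agent, url)

allow_all = "User-agent: *\nDisallow:"
block_all = "User-agent: *\nDisallow: /"

print(can_fetch(allow_all, "https://www.example.com/page"))  # True - nothing is disallowed
print(can_fetch(block_all, "https://www.example.com/page"))  # False - the entire site is blocked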