605: Page banned by robots.txt
-
Hello everyone,
I need expert help here, please. I am receiving crawl errors for my site: pages reported as banned by robots.txt, the X-Robots-Tag header, or the meta robots tag.
my robots.txt file is:
User-agent: *
Disallow:
-
Hey there! I just followed up on the message you sent to our help team, but I wanted to also post the answer here for reference.
It looks like the robots.txt file may have recently been changed for the site, because I created a new campaign for the subdomain and I'm not getting the same error. You should no longer see this error on your next campaign update, or you could create a new campaign and the error won't appear there.
I did notice that you ran a number of crawl tests on the site since the campaign update, but the important thing to realize is that the crawl test can be cached for up to 48 hours. (I removed the crawls in this version of the screenshot for privacy.) We also cache the crawl tests from campaign crawls, so it looks like the first crawl test you ran on the 29th was cached from your campaign crawl and the two subsequent crawl tests were cached from that first crawl test.
Again, I wanted to note that it looks like there are only links to about 2 other pages (terms and privacy) that are on the specific subdomain you are tracking, so we aren't able to crawl beyond those pages. When you limit a campaign to a specific subdomain, we can only access and crawl links that are within the same subdomain.
-
I am at a loss; I can't find the issue. Let us know what Moz says.
-
I've actually come across a handful of URLs that are NoIndexed; I'll DM you a list once it's complete.
I can't be certain this is the root of the problem (I've never seen this error in the crawl report), but based on the error you said you're getting, I believe it's a great starting point.
-
Hi Logan Ray
Thank you for the detailed guide. All of the other tools' bots are working perfectly except Moz's. My robots meta is index, follow and my robots.txt disallows nothing for all user agents, so it is still confusing why Moz is showing a crawl error. I have now emailed Moz; let's see what they reply, and I will share it here.
Thank you
-
Hi,
This sounds like it's more related to the meta robots tag, not the robots.txt file.
Try this:
- Run a Screaming Frog crawl on your site
- Once complete, go to the Directives tab
- Look for 'NoIndex' in the 'Meta Robots 1' column (should be the 3rd column)
- If you see anything marked with that tag, remove it, unless of course you need it there for a reason, in which case you should also block that page in your robots.txt file (a quick spot-check script is sketched below)
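If you'd rather not run a full crawl, you can also spot-check a few pages with a small script. Here's a minimal sketch using only Python's standard library; the example.com URLs are placeholders for your own pages, and a real crawler handles far more edge cases (redirects, multiple meta robots tags, error pages, and so on):

```python
from html.parser import HTMLParser
from urllib.request import urlopen

class MetaRobotsParser(HTMLParser):
    """Collects the content of any <meta name="robots"> tag on a page."""
    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and (attrs.get("name") or "").lower() == "robots":
            self.directives.append(attrs.get("content") or "")

def check_url(url):
    with urlopen(url) as resp:
        # X-Robots-Tag is sent as an HTTP response header, not in the HTML.
        header = resp.headers.get("X-Robots-Tag", "")
        parser = MetaRobotsParser()
        parser.feed(resp.read().decode("utf-8", errors="replace"))
    print(url)
    print("  X-Robots-Tag header:", header or "(none)")
    print("  meta robots tags:   ", parser.directives or "(none)")

# Placeholder URLs -- substitute pages from your own campaign.
for page in ["https://example.com/", "https://example.com/terms/"]:
    check_url(page)
```

Any page showing noindex in either place would explain why a crawler skips it.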
-
Are you able to provide a link to the site? (DM me if you don't want it posted on the forum.)
-
I am receiving the crawl error from Moz only.
There is no error in Google Search Console. Also, I have tested with Google's robots.txt testing tool: https://www.google.com/webmasters/tools/robots-testing-tool
My robots.txt file has no slash after Disallow:
User-agent: *
Disallow:
-
Hi Bhomes,
Try clearing your robots.txt of any content. A robots.txt with:
User-agent: *
Disallow: /
is blocking everything from crawling your site. See https://support.google.com/webmasters/answer/6062598?hl=en for testing and more details on robots.txt.
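To see the difference between an empty Disallow and Disallow: / for yourself, here's a minimal sketch using Python's built-in robotparser (the URL is a placeholder and the user agent string is just illustrative):

```python
from urllib.robotparser import RobotFileParser

def allowed(robots_txt, url, agent="rogerbot"):
    """Return True if the given robots.txt text lets `agent` fetch `url`."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return parser.can_fetch(agent, url)

page = "https://example.com/some-page/"

# An empty Disallow rule disallows nothing, i.e. the whole site is crawlable.
print(allowed("User-agent: *\nDisallow:", page))    # True
# "Disallow: /" blocks every path on the site for all matching crawlers.
print(allowed("User-agent: *\nDisallow: /", page))  # False
```

So the file you posted (Disallow: with nothing after it) allows crawling; only the version with the slash blocks the whole site.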