605: Page banned by robots.txt
-
Hello everyone,
I need expert help here. Please advise: I am receiving a crawl error for my site, "605: Page Banned by robots.txt, X-Robots-Tag HTTP Header, or Meta Robots Tag".
My robots.txt file is:
User-agent: *
Disallow:
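For reference, a robots.txt like the one above can be checked with Python's standard-library parser; a minimal sketch (the example URL and the "rogerbot" token for Moz's crawler are assumptions for illustration):

```python
import urllib.robotparser

# The robots.txt from the question: an empty Disallow value blocks nothing.
ROBOTS_TXT = """\
User-agent: *
Disallow:
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# rogerbot is Moz's crawler; any URL should be fetchable under this file.
print(rp.can_fetch("rogerbot", "https://example.com/any/page"))  # True
```

If this prints True, the robots.txt itself is not the source of the 605 error, which points to the X-Robots-Tag header or meta robots tag instead.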
-
Hey there! I just followed up on the message you sent into our help team, but I wanted to also post the answer here for reference.
It looks like the robots.txt file may have recently been changed for the site, because I created a new campaign for the subdomain and I am not seeing the same error. You should no longer see this error on your next campaign update, or you could create a new campaign and the error would not appear there.
I did notice that you ran a number of crawl tests on the site since the campaign update, but the important thing to realize is that crawl test results can be cached for up to 48 hours. (I removed the crawls in this version of the screenshot for privacy.) We also cache crawl tests from campaign crawls, so it looks like the first crawl test you ran on the 29th was cached from your campaign crawl, and the two subsequent crawl tests were cached from that first crawl test.
Again, I wanted to note that there appear to be links to only about 2 other pages (terms and privacy) on the specific subdomain you are tracking, so we aren't able to crawl beyond those pages. When you limit a campaign to a specific subdomain, we can only access and crawl links that are within that same subdomain.
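The subdomain restriction described above amounts to a simple host filter on discovered links; a minimal illustration (all URLs here are hypothetical):

```python
from urllib.parse import urlparse

def same_subdomain(link: str, campaign_root: str) -> bool:
    """Keep only links whose hostname matches the tracked subdomain exactly."""
    return urlparse(link).hostname == urlparse(campaign_root).hostname

root = "https://blog.example.com/"  # hypothetical tracked subdomain
links = [
    "https://blog.example.com/terms",
    "https://blog.example.com/privacy",
    "https://www.example.com/about",  # different subdomain: excluded from the crawl
]
crawlable = [link for link in links if same_subdomain(link, root)]
print(crawlable)  # only the terms and privacy pages survive the filter
```

With only two same-subdomain links in the frontier, a crawler limited this way has nowhere else to go, which matches the small crawl described above.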
-
I am at a loss; I can't find the issue. Let us know what Moz says.
-
I've actually come across a handful of URLs that are NoIndex; I'll DM you a list once it's complete.
I can't be certain this is the root of the problem (I've never seen this error in the crawl report), but based on the error you said you're getting, I believe it's a great starting point.
-
Hi Logan Ray,
Thank you for the detailed guide. All of the other tools' bots are working perfectly except Moz's. My meta robots is "index, follow" and my robots.txt disallows nothing for all user agents, so I'm still confused about why Moz is showing a crawl error. I have now emailed Moz; let's see what they reply, and I will share it here.
Thank you
-
Hi,
This sounds like it's more related to the meta robots tag, not the robots.txt file.
Try this:
- Run a Screaming Frog crawl on your site
- Once complete, go to the Directives tab
- Look for 'NoIndex' in the 'Meta Robots 1' column (should be the 3rd column)
- If you see any pages marked with that tag, remove it, unless of course you need it there for a reason, in which case you should also block those pages in your robots.txt file
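If you'd rather script the check than eyeball the Directives tab, here is a minimal stdlib-only Python sketch of the same idea, scanning a page's HTML for a noindex meta robots directive (the sample pages are made up for illustration):

```python
from html.parser import HTMLParser

class MetaRobotsParser(HTMLParser):
    """Collect the content of <meta name="robots" ...> tags."""
    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and a.get("name", "").lower() == "robots":
            self.directives.append(a.get("content", "").lower())

def has_noindex(html: str) -> bool:
    parser = MetaRobotsParser()
    parser.feed(html)
    return any("noindex" in d for d in parser.directives)

# Hypothetical page sources:
blocked = '<html><head><meta name="robots" content="noindex, follow"></head></html>'
open_page = '<html><head><meta name="robots" content="index, follow"></head></html>'
print(has_noindex(blocked), has_noindex(open_page))  # True False
```

Run `has_noindex` over each page's source and any True result is a candidate for the 605 error.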
-
Are you able to provide a link to the site? (DM me if you don't want it posted on the forum.)
-
I am receiving a crawl error from Moz only.
There are no errors in Google Search Console. I have also tested it with Google's robots.txt testing tool. https://www.google.com/webmasters/tools/robots-testing-too
My robots.txt file is written with no slash:
User-agent: *
Disallow:
-
Hi Bhomes,
Try clearing your robots.txt of any content. A robots.txt with:
User-agent: *
Disallow: /
is blocking everything from crawling your site. See https://support.google.com/webmasters/answer/6062598?hl=en for testing and more details on robots.txt.
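The difference that trailing slash makes can be confirmed with Python's standard-library robots.txt parser; a quick sketch (the example URL and "rogerbot" user-agent are illustrative assumptions):

```python
import urllib.robotparser

def allows_all(robots_txt: str) -> bool:
    """Parse a robots.txt string and ask if a sample URL is fetchable."""
    rp = urllib.robotparser.RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return rp.can_fetch("rogerbot", "https://example.com/page")

print(allows_all("User-agent: *\nDisallow:"))    # True: empty value blocks nothing
print(allows_all("User-agent: *\nDisallow: /"))  # False: "/" blocks the whole site
```

So an empty `Disallow:` permits all crawling, while `Disallow: /` bans every page, which is exactly what a 605 error reports.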