605 : Page banned by robots.txt
-
Hello everyone,
I need experts' help here. Please advise: I am receiving crawl errors for my site, "605: Page banned by robots.txt, X-Robots-Tag header, or meta robots tag."
My robots.txt file is:
User-agent: *
Disallow:
-
Hey there! I just followed up on the message you sent in to our help team, but I wanted to also post the answer here for reference.
It looks like the robots.txt file may have recently been changed for the site: I created a new campaign for the subdomain and I am not getting that same error. You should no longer see this error on your next campaign update, or you could create a new campaign and the error would not appear there.
I did notice that you ran a number of crawl tests on the site since the campaign update, but the important thing to realize is that the crawl test can be cached for up to 48 hours. (I removed the crawls in this version of the screenshot for privacy.) We also cache the crawl tests from campaign crawls, so it looks like the first crawl test you ran on the 29th was cached from your campaign crawl and the two subsequent crawl tests were cached from that first crawl test.
Again, I wanted to note that there appear to be links to only about 2 other pages (terms and privacy) on the specific subdomain you are tracking, so we aren't able to crawl beyond those pages. When you limit a campaign to a specific subdomain, we can only access and crawl links within that same subdomain.
-
I am at a loss; I can't find the issue. Let us know what Moz says.
-
I have actually come across a handful of URLs that are noindexed; I'll DM you a list once it's complete.
I can't be certain this is the root of the problem (I've never seen this error in the crawl report), but based on the error you said you're getting, I believe it's a great starting point.
-
Hi Logan Ray,
Thank you for the detailed guide. All the other tools' bots are working perfectly except Moz's. My robots meta tag is "index, follow" and my robots.txt disallows nothing for any user agent, so I'm still confused about why Moz is showing a crawl error. I have now emailed Moz; let's see what they reply, and I will share it here.
Thank you
-
Hi,
This sounds like it's more related to the meta robots tag, not the robots.txt file.
Try this:
- Run a Screaming Frog crawl on your site
- Once complete, go to the Directives tab
- Look for 'NoIndex' in the 'Meta Robots 1' column (it should be the 3rd column)
- If any pages are marked with that tag, remove it, unless of course you need it there for a reason, in which case you should also block those pages in your robots.txt file
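If you'd rather spot-check a single page without a full crawl, the two noindex signals can be read straight from a response. This is just a sketch using Python's standard library, not anything Moz- or Screaming Frog-specific; it assumes you pass in the page's HTML and its response headers yourself:

```python
from html.parser import HTMLParser

class MetaRobotsParser(HTMLParser):
    """Collects the content of every <meta name="robots"> tag in a page."""
    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            a = {k: (v or "") for k, v in attrs}
            if a.get("name", "").lower() == "robots":
                self.directives.append(a.get("content", "").lower())

def noindex_signals(html, headers):
    """Report whether the X-Robots-Tag header or a meta robots tag says noindex.

    Assumes `headers` uses the exact key 'X-Robots-Tag'; real HTTP headers are
    case-insensitive, so normalize keys first if needed.
    """
    parser = MetaRobotsParser()
    parser.feed(html)
    header_value = headers.get("X-Robots-Tag", "").lower()
    return {
        "x_robots_tag_noindex": "noindex" in header_value,
        "meta_robots_noindex": any("noindex" in d for d in parser.directives),
    }

page = '<html><head><meta name="robots" content="noindex, follow"></head></html>'
print(noindex_signals(page, {}))  # meta_robots_noindex is True here
```

A page flagged by either signal will be skipped for indexing by most well-behaved crawlers, which is what the Screaming Frog check is looking for.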
-
Are you able to provide a link to the site? (DM me if you don't want it posted on the forum.)
-
I am receiving a crawl error from Moz only.
There is no error in Google's Search Console. I have also tested it with Google's robots.txt testing tool: https://www.google.com/webmasters/tools/robots-testing-too
My robots.txt file is as follows, with no slash:
User-agent: *
Disallow:
-
Hi Bhomes,
Try clearing your robots.txt of any content. A robots.txt with:
User-agent: *
Disallow: /
blocks all crawlers from your entire site. See https://support.google.com/webmasters/answer/6062598?hl=en for testing and more details on robots.txt.
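The difference between the two files is easy to demonstrate with Python's standard-library robots.txt parser. This is only an illustration with a made-up URL and user agent, not how Moz's crawler is implemented:

```python
from urllib import robotparser

def allowed(robots_lines, agent, url):
    """Parse robots.txt rules (given as a list of lines) and test one URL for one user agent."""
    rp = robotparser.RobotFileParser()
    rp.parse(robots_lines)
    return rp.can_fetch(agent, url)

blocking = ["User-agent: *", "Disallow: /"]   # slash: blocks everything
open_file = ["User-agent: *", "Disallow:"]    # empty value: blocks nothing

print(allowed(blocking, "rogerbot", "https://example.com/page"))   # False
print(allowed(open_file, "rogerbot", "https://example.com/page"))  # True
```

So the one-character difference between `Disallow: /` and `Disallow:` is exactly the difference between banning every crawler from the whole site and allowing them all.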
Related Questions
-
Page Optimization Error
Hello. When I try to use On-Page Grader on a specific site, I get an error message: "Page Optimization Error. There was a problem loading this page. Please make sure the page is loading properly and that our user-agent, rogerbot, is not blocked from accessing this page." Example: https://www.csgosmurfkart.com. The site's robots.txt settings are good, and I don't think there is any blocking factor, but On-Page Grader cannot crawl the site. The campaign crawler works well on it; only On-Page Grader is not working. What should I change in my server's or site's settings so the site can be crawled? I'm using WordPress on Google Cloud. Thank you.
Moz Bar | csgosmurfcart
-
Page Optimization problem in Moz
Hello there, I've been using Moz Pro for a year now, and I've encountered small issues here and there (like some features not working correctly, but they start working again after a few hours at most), but now I have a bigger issue. The Page Optimization feature stopped working and has been dead for five days now. Even my older research is gone (check the images). Is anyone else having this problem, or is it just me? It says that the crawler is blocked or the page isn't working, but when I check the pages on other platforms with crawlers everything goes smoothly. All of the pages are working, and all of the crawlers (tested even in Google Search Console) are doing fine. I even had the Moz crawler run on 04.07.2019 as the standard weekly crawl and I got the results. Everything else is working just fine; I have a problem with the Page Optimization feature only. Thanks in advance, Ivan
Moz Bar | BMGEmployee
-
Is Moz any good for analyzing an e-commerce site? How can a CMS page be seen as duplicate content with a category page?
Hi guys, I've been using Moz for quite a long time now for two of my shops. I am now in the process of launching the second shop, and I just don't understand how a static CMS page (About Us) can be seen as duplicate content with 96 other pages, including product pages and totally different pages such as delivery information, category pages, returns, and so on. Really, Moz? Is it me or you? Your help would be much appreciated! Thank you!
Moz Bar | Sorin_T
-
MozBot Finding Duplicate Pages That Aren't Duplicates
I've been reviewing the technical audits for my campaign in Moz and noticed I have a number of duplicate content issues that I'm not really sure how to address. When I click the links of the supposed duplicates, they are all different links with different content/images. Based on what others have written in the forum, this could be because the code base is essentially the same between these pages, and many of them use query parameters (I'm assuming that is why the code is almost exactly the same across these pages), so, for example: website.com/tags/KEYWORD1?type=KEYWORD2 is flagged as a duplicate of website.com/tags/KEYWORD3?type=KEYWORD4. I read that I can use the URL Parameters tool in Google Search Console, but Search Console says that Googlebot isn't experiencing issues, so I wasn't sure if that was the right move. I can't use canonicals because these pages all have different content, and I know duplicate content is a big SEO issue, so I really wasn't sure what my next steps should be. Thanks for the help!
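If the flagged URLs really were parameter-only variants of the same path, a quick way to see which ones would fold together is to compare them with the query string stripped. This is just a sketch with hypothetical URLs, not Moz's duplicate-detection logic:

```python
from urllib.parse import urlsplit, urlunsplit
from collections import defaultdict

def strip_query(url):
    """Drop the query string and fragment so parameter-only variants collapse to one URL."""
    parts = urlsplit(url)
    return urlunsplit((parts.scheme, parts.netloc, parts.path, "", ""))

urls = [
    "https://website.com/tags/foo?type=a",
    "https://website.com/tags/foo?type=b",
    "https://website.com/tags/bar?type=a",
]

# Group URLs by their query-stripped form; groups with >1 member are parameter variants.
groups = defaultdict(list)
for u in urls:
    groups[strip_query(u)].append(u)

for base, variants in groups.items():
    print(base, len(variants))
```

Note that the two example URLs in the question differ in their paths, not just their parameters, so they would not collapse under this grouping; the shared page template is the more likely reason a crawler sees them as near-duplicates.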
Moz Bar | amaray403
-
On-Page Grader URL inaccessible when copy/pasted but not when edited
Hi! I've looked through multiple topics on this, but none quite seem to fit what's going on; hopefully someone can help! I get the error message 'Sorry, but that URL is inaccessible.' when I copy and paste a URL from my site into the search, e.g. http://www.orbussoftware.com/enterprise-architecture/. However, if I edit this to https, the search completes fine. Since we redesigned our site approximately 6 months ago, we've found most of our rankings have completely dropped off, and now that I'm getting this error I'm wondering if it has something to do with how our site is structured. If I'm getting this error with Moz, does that mean Google could be having issues too? Or is it all just a strange quirk? Thanks!
Moz Bar | JennaOrbus
-
Duplicate page found with Moz crawl test?
When I crawl my website www.radiantguard.com, the crawl test comes back with what appears to be a duplicate of my home page: http://www.radiantguard.com and http://www.radiantguard.com/. Does the crawler indeed see two different pages (and are my search engine rankings therefore potentially affected), and is this because of how my rel=canonical is set up?
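For what it's worth, an empty path and a bare trailing slash usually refer to the same resource after standard URL normalization. A small standard-library sketch (not Moz's crawler logic) shows the two home-page URLs collapsing to one:

```python
from urllib.parse import urlsplit, urlunsplit

def normalize(url):
    """Treat an empty path as '/', the usual HTTP normalization for a site root."""
    parts = urlsplit(url)
    return urlunsplit((parts.scheme, parts.netloc, parts.path or "/", parts.query, ""))

print(normalize("http://www.radiantguard.com"))
print(normalize("http://www.radiantguard.com") == normalize("http://www.radiantguard.com/"))  # True
```

A rel=canonical on the home page pointing at one of the two forms (conventionally the slash version) tells crawlers that do not normalize before comparing to treat the pair as a single page.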
Moz Bar | rhondafranklin
-
Re On-Page Grader
One of the pages I'm trying to optimise is achieving an 'A' grade; however, all the ticks are black, not green as I've seen on other page grades. Why is this? Help much appreciated. Thanks
Moz Bar | seoman10
-
Crawl Diagnostics: How many pages (deep) will it crawl for dup content
Does anyone know how deep Crawl Diagnostics will crawl when searching for duplicate content? Will it crawl the entire site, or will it only crawl "x" number of pages? Thanks!
Moz Bar | tdawson09