5xx Crawl Issues might not be issues at all. Help!
-
Hi,
I ran a crawl test on our website and it came back with 900 potential 5xx errors. When I started opening these links one by one, I could see they were actually working. So I exported the full list of 900, went to https://httpstatus.io/, and pasted the links in batches of 100. They came back with status codes of 301 / 301 / 200, which I believe means they are okay.
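For anyone who wants to script this step instead of pasting batches into httpstatus.io by hand, here is a minimal sketch using only the Python standard library. It fetches each URL and reports its final status code; the browser-style User-Agent header is an assumption, and swapping in a crawler-style UA string would let you compare how the server treats a normal visitor versus a bot.

```python
import urllib.request
import urllib.error

def fetch_status(url, timeout=10):
    """Return the final HTTP status code for a URL, following redirects.

    Uses a browser-style User-Agent (an assumption); replace it with a
    crawler UA string to test whether the server blocks bots specifically.
    """
    req = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status
    except urllib.error.HTTPError as e:
        # 4xx/5xx responses raise HTTPError; the code is still the status.
        return e.code

def is_server_error(status):
    """True only for 5xx codes, which indicate a server-side failure."""
    return 500 <= status <= 599
```

Because urlopen follows redirects by default, a 301 → 301 → 200 chain reports 200, matching what httpstatus.io shows.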
From what I've read, my programmer may need to check whether we are blocking the Moz bot, or slow the Moz bot down. I guess I'm wondering: if this isn't done, is the site actually returning these 5xx errors when Google crawls it, or is it only showing 900 errors because of the Moz bot while things are actually okay?
I know the simple answer is to have the programmer fix the Moz bot issue to know for sure, but getting programmers to do things takes a lot of time, so I'm trying to get a better idea here.
Thanks for your input.
-
Hi there!
Thanks so much for the great question! I'm so sorry to hear you're having this trouble with the 5xx errors. To resolve this we'd recommend adding a crawl delay for rogerbot to your robots.txt file. That crawl delay would look something like this:
User-agent: rogerbot
Crawl-delay: 10

This will tell our crawler to slow down when it's crawling. We do not recommend using a crawl delay of longer than 10, as this can keep the crawl from completing.
As for whether this is impacting Google's ability to crawl, I'm really not able to identify that on our end. I'm so sorry about that! The best suggestion I can make would be to check your site's server logs to see how it is responding to the other crawlers you may be concerned about.
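To act on that server-log suggestion, a short script can tally how the site responds to a particular crawler's user agent. This is a hedged sketch that assumes the common Apache/Nginx "combined" access-log format; the field layout is an assumption about your server's configuration.

```python
import re
from collections import Counter

# Matches the status code, response size, referer, and user agent that
# follow the quoted request line in a combined-format access log entry.
LOG_LINE = re.compile(r'" (?P<status>\d{3}) \d+ "[^"]*" "(?P<agent>[^"]*)"')

def count_statuses_by_bot(lines, bot_substring):
    """Count response status codes for requests whose user agent
    contains bot_substring (case-insensitive)."""
    counts = Counter()
    for line in lines:
        m = LOG_LINE.search(line)
        if m and bot_substring.lower() in m.group("agent").lower():
            counts[m.group("status")] += 1
    return counts
```

Running this over an access log with `bot_substring="rogerbot"` (or `"Googlebot"`) shows whether the 5xx responses are being served to one crawler in particular, which is the clearest sign of UA-based blocking or rate limiting.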
If you have any other questions about rogerbot or our tools, please feel free to send an email over to help@moz.com.