Moz Q&A is closed.
After more than 13 years, and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we’re not completely removing the content - many posts will still be possible to view - we have locked both new posts and new replies.
520 Error from crawl report with Cloudflare
-
I am getting a lot of 520 Server Errors in crawl reports. I see this is related to Cloudflare. Since 520 is a Cloudflare-specific code, maybe the Moz team can change the label from "unknown" to "Cloudflare 520". Perhaps the Moz team could also update the "how to fix" section in the reporting with suggestions on how to avoid seeing these in the report, or how to tell whether there is a real issue that needs to be addressed. At this point I don't know.
There must be a solution Moz can suggest, such as a Cloudflare setting that lets Rogerbot through if Cloudflare is blocking it because it doesn't like the crawler's behavior.
It could be that Rogerbot crawled my site on a bad day, or at a time when we were deploying a massive site change. If I know when my site will be down, can I pause Rogerbot?
I found this https://developers.cloudflare.com/support/troubleshooting/general-troubleshooting/troubleshooting-crawl-errors/
-
A 520 error is an HTTP status code indicating that Cloudflare received an empty, unknown, or otherwise invalid response from the origin server. This can happen for a variety of reasons, including:
Server downtime: The origin server might be down or undergoing maintenance.
Firewall restrictions: The origin server might have a firewall that is blocking requests from Cloudflare.
DNS issues: There might be a DNS misconfiguration that is preventing Cloudflare from resolving the origin server's IP address.
SSL issues: There might be an issue with the SSL certificate on the origin server.
To troubleshoot the issue, you can try the following (a short script sketching the first three checks appears after this list):
Check if the origin server is up and running.
Check if the origin server has a firewall that is blocking requests from Cloudflare.
Check if the DNS is configured correctly.
Check if the SSL certificate is valid and configured correctly.
If none of these steps resolve the issue, you can reach out to Cloudflare support for further assistance.
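If it helps, here is a minimal sketch (Python, standard library only) of the first three checks: DNS resolution, basic connectivity, and the SSL certificate of the origin. The hostname and port are placeholders for your own origin, and this tests the origin directly rather than the path Cloudflare takes to reach it.

```python
import socket
import ssl

ORIGIN_HOST = "origin.example.com"  # placeholder: your origin server's hostname
ORIGIN_PORT = 443                   # placeholder: the port Cloudflare connects to

# 1. DNS: can the origin hostname be resolved?
try:
    ip = socket.gethostbyname(ORIGIN_HOST)
    print(f"DNS OK: {ORIGIN_HOST} -> {ip}")
except socket.gaierror as exc:
    raise SystemExit(f"DNS failure: {exc}")

# 2. Connectivity: does the server accept connections (i.e. is it up, and not
#    firewalled off from this network)?
try:
    sock = socket.create_connection((ip, ORIGIN_PORT), timeout=10)
except OSError as exc:
    raise SystemExit(f"Connection failure: {exc}")

# 3. SSL: does the origin present a valid certificate for the hostname?
context = ssl.create_default_context()
try:
    with context.wrap_socket(sock, server_hostname=ORIGIN_HOST) as tls:
        cert = tls.getpeercert()
        print(f"SSL OK, certificate expires: {cert.get('notAfter')}")
except ssl.SSLError as exc:
    raise SystemExit(f"SSL failure: {exc}")
finally:
    sock.close()
```

Bear in mind this only checks the origin from wherever you run the script; Cloudflare connects from its own network, so a firewall could still block Cloudflare's IP ranges even when this script succeeds.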
-
@awilliams_kingston To answer your question, there is no option to pause Rogerbot manually. However, Rogerbot only crawls a website when a Site Crawl campaign is active and scheduled to run. If you want to pause Rogerbot, you can stop the active campaign or schedule the next crawl to start at a later time.
To schedule a Site Crawl, go to your Moz Pro account, click on "Site Crawl" in the left-hand navigation menu, and select "Add Campaign" to set up a new campaign or select an existing one. From there, you can customize your crawl settings, including the crawl frequency and start time.
If you have a scheduled maintenance window and want to prevent Rogerbot from crawling your site during that time, you can adjust the crawl frequency to avoid overlapping with your maintenance schedule. You can also use a robots.txt file to block the crawler from accessing specific pages or sections of your site.
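As an illustration (the paths are hypothetical placeholders), a robots.txt along these lines would keep Rogerbot out of specific sections while leaving other crawlers unaffected. Note that robots.txt limits which URLs get crawled; it cannot pause the crawler on a schedule.

```
# Hypothetical sketch: restrict Moz's crawler (rogerbot) only
User-agent: rogerbot
Disallow: /staging/
Disallow: /under-maintenance/

# Everything else stays open to other crawlers
User-agent: *
Disallow:
```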
-
@awilliams_kingston The 520 server error you're seeing in your Moz crawl reports is related to Cloudflare. It's a generic error, which means it could be caused by a variety of issues, including server overload or misconfigured settings.
To address this, you could check your Cloudflare firewall settings and see if there are any rules that are blocking the Moz Rogerbot crawler. If there are, try adding an exception for the Rogerbot user agent to allow it to crawl your site without being blocked.
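As a rough sketch of what such an exception might look like as a Cloudflare custom rule (the expression below follows my understanding of Cloudflare's rule-expression syntax, and it assumes Rogerbot identifies itself with a user-agent string containing "rogerbot"; verify both against Moz's and Cloudflare's current documentation):

```
Rule name:  Allow Moz Rogerbot
Expression: (http.user_agent contains "rogerbot")
Action:     Skip the security features (WAF rules, bot protection) that are
            currently blocking or challenging the crawler
```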
If you know your site will be down for maintenance or undergoing significant changes, you could pause the Moz crawler during that time to prevent it from generating false 520 errors in your reports.
Finally, you could check out the troubleshooting guide in the Cloudflare documentation for more information on identifying and addressing crawl errors. Remember to work with both Moz and Cloudflare support teams to find a solution that works for your specific setup.
-
@Kateparish Thank you.
How do you pause Rogerbot? I can't find anything on that in my admin panel, but maybe that's because no crawl is happening at the moment and my next crawl is scheduled for a few days from now. Also, is there a way to schedule a pause while a crawl is happening? If I know I have site maintenance on a certain day of the week at a specific time, for example, can I have Rogerbot take a break?
-
A 520 error typically indicates a connection error between Cloudflare and the origin server. This error occurs when the server returns an empty or invalid response to Cloudflare, or when the server takes too long to respond.
To troubleshoot a 520 error from a crawl report with Cloudflare, you can take the following steps:
Check the server logs: The first step in troubleshooting a 520 error is to check the server logs for any error messages. Look for any errors related to the server's network or connectivity, such as DNS resolution issues, network timeouts, or firewall restrictions.
Check Cloudflare logs: Cloudflare logs can provide additional insights into the cause of the error. Check the Cloudflare logs for any error messages or connection issues between Cloudflare and the origin server.
Temporarily disable Cloudflare: Temporarily disabling Cloudflare can help you determine if the error is caused by Cloudflare or the origin server. If the error disappears when Cloudflare is disabled, then the issue is likely with Cloudflare.
Contact Cloudflare support: If you are unable to resolve the issue on your own, you can contact Cloudflare support for assistance. Provide them with the server logs and Cloudflare logs, as well as any other relevant information, to help them diagnose the issue.
By following these steps, you should be able to identify and resolve the 520 errors appearing in your crawl report. One quick way to narrow things down further is sketched below.
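For example, you can request the same URL with a crawler-like user agent and a browser-like one and compare the results. A minimal sketch in Python (the URL is a placeholder, and "rogerbot" stands in for Rogerbot's documented user-agent string):

```python
import requests

SITE_URL = "https://www.example.com/"  # placeholder: a URL reported with a 520

# Assumed user agents to compare; check Moz's docs for Rogerbot's exact string.
USER_AGENTS = {
    "crawler-like": "rogerbot",
    "browser-like": "Mozilla/5.0 (compatible; manual-check)",
}

for label, ua in USER_AGENTS.items():
    try:
        resp = requests.get(SITE_URL, headers={"User-Agent": ua}, timeout=30)
        print(f"{label}: HTTP {resp.status_code}, {len(resp.content)} bytes")
    except requests.RequestException as exc:
        print(f"{label}: request failed: {exc}")

# If the crawler-like request is blocked or errors while the browser-like one
# succeeds, a firewall or bot-protection rule is the likely culprit; if both
# fail intermittently, look at origin load, timeouts, and the server logs.
```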