How do I fix an 803 Error?
-
I got an 803 error this week on the Moz crawl for one of my pages. The page loads normally in the browser. We use Cloudflare.
Is there anything that I should do or do I wait a week and hope it disappears?
803 Incomplete HTTP response received
Your site closed its TCP connection to our crawler before our crawler could read a complete HTTP response. This typically occurs when misconfigured back-end software responds with a status line and headers but immediately closes the connection without sending any response data.
-
Kristina from Moz's Help Team here. Here is the working link to our Crawl Errors resource guide, if you still need it!
https://moz.com/help/guides/moz-pro-overview/crawl-diagnostics/errors-in-crawl-reports
-
It would be great to read more about this issue here. I would love to debug/troubleshoot the 803 errors, but I have no idea where to start. One problem: it's not possible to adjust the crawl speed/delay of the Moz bot, so I can't tell whether the bot itself is the problem or not. Any suggestions for how to debug an 803 crawl error?
TIA,
Jörg
-
Hi Sha,
The first link with the complete list is not working. I would love to access it. Where can I find the link?
Thanks in advance, Michiel
-
Same here, I found an 803 error on an image. What should I do now? Can you please help?
Thanks
-
Hi,
Found an 803 error on an image. Does that mean I should compress or otherwise improve the image, or is it a web server error?
Thank you,
-
So if it is a standard WordPress page, would the issue more likely be with the WordPress code or with my on-page content?
-
Hi Zippy-Bungle,
To understand first why the 803 error was reported:
When a page is requested, the web server sends header details describing what's to be delivered. You can see a complete list of these HTTP header fields here.
One of the headers sent by the web server is Content-Length, which tells the client how many bytes of body the server is about to send. So let's say, for example, that the Content-Length is 100 bytes but the server only sends 74 bytes (those 74 bytes may be valid HTML, but their length doesn't match what the Content-Length header promised).
Since the web server only sent 74 bytes while the crawler expected 100, the crawler sees the TCP connection close while it is still trying to read the remaining bytes the web server said it would send. So you get an 803 error.
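To see exactly what the crawler sees, here is a minimal sketch (Python standard library only; this is my own illustration, not anything Moz provides) that simulates a server declaring 100 bytes but sending only 74 and then closing, and a strict client that reads the raw response and counts what actually arrived:

```python
import socket
import threading

def misbehaving_server(host="127.0.0.1"):
    """Start a tiny server that declares Content-Length: 100 but sends
    only 74 body bytes before closing -- the behaviour behind an 803."""
    srv = socket.socket()
    srv.bind((host, 0))
    srv.listen(1)
    port = srv.getsockname()[1]

    def serve():
        conn, _ = srv.accept()
        conn.recv(1024)  # consume the request
        conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 100\r\n\r\n" + b"x" * 74)
        conn.close()
        srv.close()

    threading.Thread(target=serve, daemon=True).start()
    return host, port

def fetch_body(host, port):
    """Send a minimal GET and return (declared_length, bytes_received)."""
    s = socket.create_connection((host, port), timeout=5)
    s.sendall(b"GET / HTTP/1.1\r\nHost: %b\r\nConnection: close\r\n\r\n" % host.encode())
    raw = b""
    while True:
        chunk = s.recv(4096)
        if not chunk:  # server closed the connection
            break
        raw += chunk
    s.close()
    head, _, body = raw.partition(b"\r\n\r\n")
    declared = None
    for line in head.split(b"\r\n"):
        if line.lower().startswith(b"content-length:"):
            declared = int(line.split(b":", 1)[1])
    return declared, len(body)
```

Running `fetch_body(*misbehaving_server())` returns a declared length of 100 against 74 bytes actually received, which is exactly the mismatch a strict crawler flags.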
Now, browsers don't complain when a mismatch like this happens, because modern browsers are lenient about Content-Length and simply render whatever bytes they receive. But Roger Mozbot (the Moz crawler, identified in your logs as RogerBot) is on a mission to show you any errors that might be occurring, so Roger is configured to detect and report such errors.
The degree to which an 803 error adversely affects crawl efficiency for search engine bots such as Googlebot, Bingbot, and others will vary, but the fundamental problem with all 8xx errors is that they result from violations of the underlying HTTP or HTTPS protocol. A crawler expects every response it receives to conform to the protocol and will typically throw an exception when it encounters a protocol-violating response.
Because 8xx errors generally indicate a badly misconfigured site, fixing them should be a priority to ensure that the site can be crawled effectively. It is worth noting here that Bingbot is well known for being highly sensitive to technical errors.
So what makes the mismatch happen?
The problem could originate in the website itself (page code), in back-end application code, or in the web server software. There are two broad sources:
- Crappy code
- A buggy server
I'm afraid you will need to get a tech who understands this type of problem to work through each of these possibilities to isolate and resolve the root cause.
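As a first pass before calling in a tech, a rough check like the following can narrow things down. This is my own sketch (Python standard library only; the URL and User-Agent string are placeholders, not anything Moz-specific): it fetches a page and compares the declared Content-Length against the bytes actually received, and also catches the case where the server closes early.

```python
import http.client
from urllib.parse import urlparse

def is_length_mismatch(declared, received_bytes):
    """True when a declared Content-Length disagrees with the body size."""
    return declared is not None and int(declared) != received_bytes

def check_page(url):
    """Fetch `url` and report any Content-Length mismatch or early close."""
    parts = urlparse(url)
    Conn = (http.client.HTTPSConnection if parts.scheme == "https"
            else http.client.HTTPConnection)
    conn = Conn(parts.netloc, timeout=15)
    try:
        conn.request("GET", parts.path or "/",
                     headers={"User-Agent": "803-debug"})  # placeholder UA
        resp = conn.getresponse()
        declared = resp.getheader("Content-Length")
        try:
            body = resp.read()
        except http.client.IncompleteRead as exc:
            # The server closed before sending everything it promised.
            print(f"Incomplete response: got {len(exc.partial)} of {declared} bytes")
            return
        if is_length_mismatch(declared, len(body)):
            print(f"Mismatch: declared {declared}, received {len(body)} bytes")
        else:
            print(f"OK: status {resp.status}, {len(body)} bytes")
    finally:
        conn.close()
```

Run it against both the flagged URL and a known-good page on the same server; if only the flagged page reports a mismatch or an early close, the problem is on the server side rather than with the crawler.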
The Moz Resource Guide on HTTP Errors in Crawl Reports is also worth a read in case Roger encounters any other infrequently seen errors.
Hope that helps,
Sha