How do I fix an 803 Error?
-
I got an 803 error this week on the Moz crawl for one of my pages. The page loads normally in the browser. We use Cloudflare.
Is there anything that I should do or do I wait a week and hope it disappears?
803 Incomplete HTTP response received
Your site closed its TCP connection to our crawler before our crawler could read a complete HTTP response. This typically occurs when misconfigured back-end software responds with a status line and headers but immediately closes the connection without sending any response data.
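To picture that failure mode, here is a minimal, purely illustrative sketch (not Moz's or anyone's production code) of a back-end that sends a status line and headers and then closes the connection without sending the body it promised:

```python
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("127.0.0.1", 8080))
server.listen(1)
print("Toy broken back-end listening on http://127.0.0.1:8080 ...")

while True:
    client, _ = server.accept()
    client.recv(4096)  # read (and ignore) the request
    # Promise a 100-byte body in Content-Length, then hang up without sending any of it.
    client.sendall(
        b"HTTP/1.1 200 OK\r\n"
        b"Content-Type: text/html\r\n"
        b"Content-Length: 100\r\n"
        b"\r\n"
    )
    client.close()  # a strict client, such as a crawler, now sees an incomplete response
```

Any strict HTTP client that fetches from this toy server will see exactly the kind of incomplete response described above.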
-
Kristina from Moz's Help Team here. Here is the working link to our Crawl Errors resource guide, in case you still need it!
https://moz.com/help/guides/moz-pro-overview/crawl-diagnostics/errors-in-crawl-reports
-
It would be great to read more about this issue here. I would love to debug/troubleshoot the 803 errors, but I have no idea where to start. One problem: it's not possible to adjust the crawl speed/delay of the Moz bot, so I can't tell whether the bot itself is the problem or not. Any suggestions out there on how to debug an 803 crawl error?
TIA,
Jörg
-
Hi Sha,
The first link with the complete list is not working. I would love to access it. Where can I find the link?
Thanks in advance, Michiel
-
Same here, I found an 803 error on an image. What should I do now? Can you please help?
Thanks
-
Hi,
Found an 803 error on an image. Does that mean I should compress or otherwise improve the image somehow, or is it a web server error?
Thank you,
-
So if it is a standard WordPress page, would the issue likely be with the WordPress code, or with my on-page content?
-
Hi Zippy-Bungle,
First, to understand why the 803 error was reported:
When a page is requested, the web server sends HTTP headers describing what it is about to deliver. You can see a complete list of these HTTP header fields here.
One of those headers is Content-Length, which tells the client how many bytes of body the server is about to send. So let's say, for example, that Content-Length is 100 bytes but the server only sends 74 (those 74 bytes may even be valid HTML, but the length does not match what the header promised).
Since the web server only sent 74 bytes while the crawler was still expecting 100, the crawler sees the TCP connection close while it is still trying to read the bytes the web server said it was going to send. So you get an 803 error.
Now, browsers generally shrug off a mismatch like this and simply render whatever they receive, but Roger Mozbot (the Moz crawler, identified in your logs as RogerBot) is on a mission to show you any errors that might be occurring, so Roger is configured to detect and report them.
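If you want to reproduce what Roger sees, one simple check is to request the page yourself and compare the Content-Length the server declares with the number of body bytes that actually arrive before the connection closes. Here is a minimal sketch using only the Python standard library; the URL at the bottom is a placeholder:

```python
import http.client
from urllib.parse import urlparse

def check_content_length(url):
    """Request url and compare the declared Content-Length with the bytes received."""
    parts = urlparse(url)
    conn_cls = http.client.HTTPSConnection if parts.scheme == "https" else http.client.HTTPConnection
    conn = conn_cls(parts.netloc, timeout=15)
    conn.request("GET", parts.path or "/", headers={"User-Agent": "803-debug"})
    resp = conn.getresponse()
    declared = resp.getheader("Content-Length")
    try:
        received = len(resp.read())
    except http.client.IncompleteRead as exc:
        # The server closed the connection before sending everything it promised;
        # this is the same situation Roger reports as an 803.
        received = len(exc.partial)
    finally:
        conn.close()
    print(f"{url}: declared Content-Length={declared}, body bytes received={received}")

check_content_length("https://www.example.com/problem-page/")  # placeholder URL
```

Pointing this check at the toy back-end sketched earlier in the thread (http://127.0.0.1:8080/) should report a declared length of 100 and 0 bytes received, which is the same shape of mismatch Roger flags as an 803.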
The degree to which an 803 error will adversely affect crawl efficiency for search engine bots such as Googlebot, Bingbot and others will vary, but the fundamental problem with all 8xx errors is that they result from violations of the underlying HTTP or HTTPS protocol. The crawler expects all responses it receives to conform to the HTTP protocol and will typically throw an exception when encountering a protocol-violating response.
Because protocol-level errors like these generally indicate a badly misconfigured site, fixing them should be a priority to ensure that the site can be crawled effectively. It is worth noting here that Bingbot is well known for being highly sensitive to technical errors.
So what makes the mismatch happen?
The problem could originate from the website itself (page code), from the server itself, or from the web server software. There are two broad sources:
- Crappy code
- Buggy server
I'm afraid you will need to get a tech who understands this type of problem to work through each of these possibilities to isolate and resolve the root cause.
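Since the original question mentions Cloudflare, one way to work out which side is truncating the response is to compare a fetch that goes through Cloudflare with one made directly against the origin server. This is a rough sketch only, under loud assumptions: HOSTNAME, ORIGIN_IP, and PATH are hypothetical placeholders, certificate verification is switched off for the raw-IP fetch, and an origin that relies on SNI or plain HTTP will need adjusting:

```python
import http.client
import ssl

HOSTNAME = "www.example.com"  # placeholder: your site's hostname
ORIGIN_IP = "203.0.113.10"    # placeholder: your origin server's real IP (bypasses Cloudflare)
PATH = "/problem-page/"       # placeholder: the URL path that reported the 803

def fetch(address):
    """Fetch PATH from the given address and report declared vs. received body bytes."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False       # we may be connecting by raw IP
    ctx.verify_mode = ssl.CERT_NONE  # debugging sketch only; do not reuse in production code
    conn = http.client.HTTPSConnection(address, context=ctx, timeout=15)
    conn.request("GET", PATH, headers={"Host": HOSTNAME, "User-Agent": "803-debug"})
    resp = conn.getresponse()
    declared = resp.getheader("Content-Length")
    try:
        received = len(resp.read())
    except http.client.IncompleteRead as exc:
        received = len(exc.partial)  # the connection closed early, as the crawler saw
    conn.close()
    return resp.status, declared, received

for label, address in (("via Cloudflare", HOSTNAME), ("direct to origin", ORIGIN_IP)):
    status, declared, received = fetch(address)
    print(f"{label}: status={status}, Content-Length={declared}, received={received}")
```

If only the direct fetch comes back short, the problem is on your own server or in the application code; if only the fetch through Cloudflare does, the truncation is happening at the CDN layer.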
The Moz Resource Guide on HTTP Errors in Crawl Reports is also worth a read in case Roger encounters any other infrequently seen errors.
Hope that helps,
Sha
-
Related Questions
-
404 error for unknown URL that Moz is finding in our blog
I'm receiving 404 errors on my site crawl for messinastaffing.com. They seem to be generated only from our blog posts, which sit on HubSpot. I've searched high and low and can't identify why our site URL is being added at the end - I've tried every link in our blog and cannot repeat the error the crawl is finding. For instance: Referer is: http://blog.messinastaffing.com/take-charge-career-story-compelling-cover-letter/ 404 error is: http://blog.messinastaffing.com/take-charge-career-story-compelling-cover-letter/www.messinastaffing.com I agree that the 404 error URL doesn't exist, but I can't identify where Moz is finding it. I have approximately 75 of these errors - one for every blog post on our site. Beth Morley Vice President, Operations Messina Group Staffing Solutions
(847) 692-0613 | www.messinastaffing.com
Moz Pro | MessinaGroup
-
Could my Crawl Error Report be wrong?
Hi there, I am using the Yoast SEO plugin on a WordPress website. I am currently showing 70 medium-priority 'missing meta description' crawl errors on my Moz Pro account. This number of missing meta descriptions has increased over the last 6 weeks, but every single page / post / tag / category has both the title and meta description completed via Yoast. I requested a Googlebot crawl of the site a few weeks ago as I thought it perhaps wasn't being crawled and updated. Any idea what the issue might be? Could the Moz report be incorrect for some reason? Or could something be blocking Moz / Google from seeing the Yoast plugin?
Moz Pro | skehoe
-
What is the best approach to handling 404 errors?
Hello All - I'm new here and working on the SEO of my site www.shoottokyo.com. When I find 4xx (client) errors, what is the best way to deal with them? For example, I am finding an error like this: http://shoottokyo.com/2010/11/28/technology-and-karma/ This may have been caused when I updated my permalinks from shoottokyo.com/2011/09/postname to shoottokyo.com/postname. I was using the plugin 'Permalinks Moved Permanently' to fix them. Sometimes I find something like http://shoottokyo.com/a-very-long-week/www.newscafe.jp and I can tell that I simply have a bad link to News Cafe, and I can go to the post and correct it, but in the case of the first one I can't find out where the crawler even found the problem. I'm using WordPress. Is it best to just use a plugin like 'Redirection' to redirect the rest that have errors where I cannot find the source of the issue? Thanks Dave
Moz Pro | ShootTokyo
-
404 Errors generating in WP
Our crawl reports are returning several 404 errors for pages with URLs that look like: /category/consulting/page/5/ The tag changes, the page number changes, but the result is always the same: a big glaring 404. Our sites are built on WordPress Multi-site, and I am fairly certain this issue is on the WP end, but I can't figure out why it is generating pages out to infinity, essentially, from the tags and categories. It is worse on some sites than others, but is happening across the board (my initial concern was that it might be a theme issue, but that does not seem to be the case). If anyone has run into this issue and knows a fix, your insight would be greatly appreciated. Thanks!
Moz Pro | SIXSEO
-
How can I prevent duplicate page content errors generated by the tags on my WordPress on-site blog?
When I add metadata and a canonical reference to my blog tags for my on-site blog, which runs on a WordPress.org template, Roger reports duplicate content errors. How can I avoid this problem? I want to use up to 5 tags per post, all with the same canonical reference, and each campaign scan generates errors/warnings for me!
Moz Pro | ZoeAlexander
-
Title Missing or Empty 6000+ errors
This seems like an issue that could affect my site's current rank. I own http://www.funnygusta.com and yesterday I saw a drop in traffic from 6,000 uniques to 3,500. Yesterday I also received a new crawl report from SEOmoz, with more than 6,000 'title missing or empty' errors. Is there a direct correlation between rank and these errors? It's also weird because I use a title on every post, I use the SEO plugin by Yoast, and titles are filled in automatically. Any idea where to begin? Thanks
Moz Pro | xplicit
-
SEOmoz showing crawl errors but Webmaster Tools says no errors, need help!
Hi, this is my first question and I couldn't find a similar question on here. Basically, I have a client's website that is showing 150 duplicate page title and content errors, plus others. The SEOmoz analysis is showing me, for example, 3 duplicate homepage URLs: 1. www.domain.com 2. domain.com 3. www.domain.com/index.html All 3 are the same page. After explaining the errors to the guy who built the website, he assured me that the main URL is URL 1 and the other 2 are 301 redirects. However, the SEOmoz analysis doesn't seem to change the results, and Webmaster Tools doesn't seem to show any errors at all. Also, if I try all 3 URLs, there are no redirects to URL 1. Any help or clarity would be awesome! Thanks, e-bob
Moz Pro | bobsnowzell
-
When using WordPress, why do so many metadata errors show up here and in Google Webmaster Tools? A list of dos would be good.
WordPress is a good CMS, yet there are so many metadata errors occurring. Is there a list of things to look out for, and of good practices, when using WordPress?
Moz Pro | CosmikCarrot