How do I fix an 803 error?
-
I got an 803 error this week on the Moz crawl for one of my pages. The page loads normally in the browser. We use Cloudflare.
Is there anything that I should do or do I wait a week and hope it disappears?
803 Incomplete HTTP response received
Your site closed its TCP connection to our crawler before our crawler could read a complete HTTP response. This typically occurs when misconfigured back-end software responds with a status line and headers but immediately closes the connection without sending any response data.
-
Kristina from Moz's Help Team here. Here is the working link to our Crawl Errors resource guide if you're still needing it!
https://moz.com/help/guides/moz-pro-overview/crawl-diagnostics/errors-in-crawl-reports
-
It would be great to read more about this issue here. I would love to debug/troubleshoot the 803 errors, but I have no idea where to start. One problem: it's not possible to adjust the crawl speed/delay of the Moz bot, so I can't tell whether the bot itself is the problem. Any suggestions for how to debug an 803 crawl error?
TIA,
Jörg
-
Hi Sha,
The first link with the complete list is not working. I would love to access it. Where can I find the link?
Thanks in advance, Michiel
-
Same here, I found an 803 error on an image. What should I do now? Can you please help?
Thanks
-
Hi,
Found an 803 error on an image. Does that mean I should compress or otherwise improve the image, or is it a web server error?
Thank you,
-
So if it is a standard WordPress page, would the issue likely be with the WordPress code, or with my on-page content?
-
Hi Zippy-Bungle,
First, to understand why the 803 error was reported:
When a page is requested, the web server sends HTTP headers describing the response it is about to deliver. You can see a complete list of these HTTP header fields here.
One of the headers sent by the web server is Content-Length, which tells the client how many bytes of body the server is going to send. So let's say, for example, that Content-Length is 100 bytes but the server only sends 74 bytes. Those 74 bytes may even be valid HTML, but their length does not match the Content-Length the server declared.
Since the web server only sent 74 bytes when the crawler expected 100, the crawler sees the TCP connection close while it is still trying to read the number of bytes the web server said it was going to send. So you get an 803 error.
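The 100-byte vs. 74-byte mismatch above can be checked mechanically. Here is a minimal sketch in Python (the `content_length_mismatch` helper is invented for this illustration) that compares the declared Content-Length against the bytes actually present in a raw response:

```python
def content_length_mismatch(raw_response: bytes):
    """Return (declared, actual) body lengths for a raw HTTP response."""
    head, _, body = raw_response.partition(b"\r\n\r\n")
    declared = None
    for line in head.split(b"\r\n")[1:]:  # skip the status line
        name, _, value = line.partition(b":")
        if name.strip().lower() == b"content-length":
            declared = int(value.strip())
    return declared, len(body)

# The scenario from the explanation: 100 bytes promised, 74 delivered.
resp = b"HTTP/1.1 200 OK\r\nContent-Length: 100\r\n\r\n" + b"x" * 74
print(content_length_mismatch(resp))  # (100, 74)
```

A declared/actual pair that does not match is exactly the condition reported as an 803.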
Now browsers don't complain when a mismatch like this happens; they simply render whatever arrives before the connection closes. But Roger Mozbot (the Moz crawler, identified in your logs as rogerbot) is on a mission to show you any errors that might be occurring, so Roger is configured to detect and report them.
The degree to which an 803 error will adversely affect crawl efficiency for search engine bots such as Googlebot, Bingbot and others will vary, but the fundamental problem with all 8xx errors is that they result from violations of the underlying HTTP or HTTPS protocol. The crawler expects all responses it receives to conform to the HTTP protocol and will typically throw an exception when encountering a protocol-violating response.
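To illustrate the kind of exception a strict client throws, here is a sketch using Python's standard library. The throwaway local server is contrived purely for illustration: it promises 100 bytes, sends 74, and closes, and the client then reads the response the strict way a crawler would:

```python
import http.client
import socket
import threading

# Throwaway local server: claims 100 bytes, sends only 74, then closes.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]

def serve_once():
    conn, _ = srv.accept()
    conn.recv(1024)  # read (and ignore) the request
    conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 100\r\n\r\n" + b"x" * 74)
    conn.close()

t = threading.Thread(target=serve_once)
t.start()

# Strict read, as a crawler would do: expect exactly Content-Length bytes.
conn = http.client.HTTPConnection("127.0.0.1", port, timeout=5)
conn.request("GET", "/")
resp = conn.getresponse()
try:
    resp.read()
    result = "complete response"
except http.client.IncompleteRead as exc:
    result = f"incomplete response: got {len(exc.partial)} bytes"
conn.close()
t.join()
srv.close()
print(result)  # incomplete response: got 74 bytes
```

Running the same strict read against your own page (substituting your host and path for the local server) is one way to reproduce what Roger sees without waiting for the next crawl.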
Errors like these generally indicate a badly-misconfigured site, so fixing them should be a priority to ensure that the site can be crawled effectively. It is worth noting here that Bingbot is well known for being highly sensitive to technical errors.
So what makes the mismatch happen?
The mismatch can originate in the website itself (the page-generating code) or in the web server. There are two broad sources:
- Crappy code (the page, plugin, or CMS that generates the response)
- A buggy or misconfigured web server
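As one hypothetical illustration of the "crappy code" case, a WSGI-style handler (names invented for this sketch) can declare a Content-Length computed from one buffer while actually emitting a truncated one:

```python
# Hypothetical example of the "crappy code" case: the app declares a
# Content-Length for the body it planned to send, but a bug truncates
# the body it actually returns. The server faithfully relays both, so
# the client sees 74 bytes where 100 were promised.
def buggy_app(environ, start_response):
    planned = b"<html><body>" + b"x" * 88  # 100 bytes we meant to send
    start_response("200 OK", [("Content-Type", "text/html"),
                              ("Content-Length", str(len(planned)))])
    return [planned[:74]]  # bug: only 74 bytes actually go out

# Inspect the mismatch without running a server:
captured = {}
def fake_start_response(status, headers):
    captured.update(headers)

body = b"".join(buggy_app({}, fake_start_response))
declared = int(captured["Content-Length"])
print(declared, len(body))  # 100 74
```

Broken output filters, exceptions thrown mid-response, and byte counts computed from multibyte strings are common real-world ways application code produces this exact pattern.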
I'm afraid you will need to get a tech who understands this type of problem to work through each of these possibilities to isolate and resolve the root cause.
The Moz Resource Guide on HTTP Errors in Crawl Reports is also worth a read in case Roger encounters any other infrequently seen errors.
Hope that helps,
Sha