Incorrect crawl errors
-
A crawl has indicated that there are some 5XX server errors on my website:
Error Code 608: Page not Decodable as Specified Content Encoding
Error Code 803: Incomplete HTTP Response Received
Error Code 803: Incomplete HTTP Response Received
Error Code 608: Page not Decodable as Specified Content Encoding
Error Code 902: Network Errors Prevented Crawler from Contacting Server
The five pages in question are all in fact perfectly working pages and are returning HTTP 200 codes. Is this a problem with the Moz crawler?
-
Thanks for this! I didn't think to check the server logs. I'll have them checked to make sure that the server isn't blocking Moz from the crawl. We have thousands of URLs on our website and quite a strict security policy on the server, so I imagine Moz has probably been blocked.
Thanks,
Liam
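One quick way to confirm this from the server side is to look for Moz's crawler (user agent "rogerbot") in the access logs. A minimal sketch, assuming a standard combined log format and a hypothetical log path:

```python
# Minimal sketch: tally the response codes served to Moz's crawler (user agent
# "rogerbot") in a combined-format access log. Log path and format are assumptions.
import re
from collections import Counter

LOG_PATH = "/var/log/nginx/access.log"   # hypothetical path -- adjust for your server
LINE_RE = re.compile(r'"[^"]*" (?P<status>\d{3}) \S+ "[^"]*" "(?P<ua>[^"]*)"')

status_counts = Counter()
with open(LOG_PATH, encoding="utf-8", errors="replace") as log:
    for line in log:
        match = LINE_RE.search(line)
        if match and "rogerbot" in match.group("ua").lower():
            status_counts[match.group("status")] += 1

if status_counts:
    print(dict(status_counts))
else:
    print("No rogerbot requests logged -- they may be blocked before reaching the web server")
```

No rogerbot entries at all usually means the requests are being dropped before they reach the web server, for example by a firewall or WAF.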
-
Hi,
These error codes are Moz custom codes that list errors it encounters when crawling your site. It's quite possible that these pages load fine when you check them in a browser (and that Googlebot is able to crawl them as well).
You can find the full list of crawl errors here: https://moz.com/help/guides/search-overview/crawl-diagnostics/errors-in-crawl-reports. You could check these URLs with a tool like web-sniffer.net to inspect the responses, and review the configuration of your server.
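If you prefer a script to a hosted tool, a short check of the status line and response headers looks like this (a sketch; the URL is a placeholder for one of the flagged pages):

```python
# Minimal sketch: print the status line and response headers for a URL,
# similar to what web-sniffer.net shows. The URL below is a placeholder.
import requests

response = requests.get(
    "https://www.example.com/some-page",   # hypothetical URL -- use a flagged page
    headers={"Accept-Encoding": "gzip, deflate"},
    timeout=15,
)
print(response.status_code, response.reason)
for name, value in response.headers.items():
    print(f"{name}: {value}")
```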
-
608 errors: Home page not decodable as specified Content-Encoding
The server response headers indicated the response used gzip or deflate encoding but our crawler could not understand the encoding used. To resolve 608 errors, fix your site server so that it properly encodes the responses it sends.
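One way to verify this is to fetch the raw, undecoded bytes and try to decompress them with the encoding the server declares. A minimal sketch, with a placeholder URL:

```python
# Minimal sketch: fetch the raw, undecoded bytes and check that they really
# decompress with the Content-Encoding the server declares. Placeholder URL.
import gzip
import zlib
import requests

response = requests.get(
    "https://www.example.com/some-page",   # hypothetical URL -- use a flagged page
    headers={"Accept-Encoding": "gzip, deflate"},
    stream=True,
    timeout=15,
)
declared = response.headers.get("Content-Encoding", "identity").lower()
raw_body = response.raw.read()             # bytes as sent on the wire, not decoded

try:
    if declared == "gzip":
        gzip.decompress(raw_body)
        print("Body decodes cleanly as gzip")
    elif declared == "deflate":
        try:
            zlib.decompress(raw_body)                   # zlib-wrapped deflate
        except zlib.error:
            zlib.decompress(raw_body, -zlib.MAX_WBITS)  # raw deflate stream
        print("Body decodes cleanly as deflate")
    else:
        print(f"Content-Encoding is '{declared}'; nothing to verify")
except (OSError, zlib.error) as exc:
    print(f"Declared Content-Encoding '{declared}' but the body failed to decode: {exc}")
```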
803 errors: Incomplete HTTP response received
Your site closed its TCP connection to our crawler before our crawler could read a complete HTTP response. This typically occurs when misconfigured back-end software responds with a status line and headers but immediately closes the connection without sending any response data.
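A truncated response can be spotted by comparing the bytes received with the declared Content-Length, or by catching the connection being closed mid-body. A minimal sketch (placeholder URL; identity encoding is requested so the length comparison is meaningful):

```python
# Minimal sketch: spot a truncated response by comparing bytes received with
# the declared Content-Length, or by catching an early connection close.
import requests

url = "https://www.example.com/some-page"   # hypothetical URL
try:
    response = requests.get(url, headers={"Accept-Encoding": "identity"}, timeout=15)
    declared = response.headers.get("Content-Length")
    received = len(response.content)
    if declared is not None and received < int(declared):
        print(f"Incomplete body: got {received} of {declared} bytes")
    else:
        print(f"Status {response.status_code}, {received} bytes received")
except requests.exceptions.ChunkedEncodingError as exc:
    print(f"Server closed the connection mid-response: {exc}")
except requests.exceptions.ConnectionError as exc:
    print(f"Connection failed or was reset: {exc}")
```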
902 errors: Unable to contact server
The crawler resolved an IP address from the host name but failed to connect at port 80 for that address. This error may occur when a site blocks Moz's IP address ranges. Please make sure you're not blocking AWS.
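A simple way to reproduce what the crawler does is to resolve the hostname and attempt a plain TCP connection, ideally from an AWS instance since that is where Moz's requests originate. A minimal sketch with a placeholder hostname:

```python
# Minimal sketch: resolve the hostname and try a plain TCP connection, roughly
# what a crawler does before sending its request. Placeholder hostname.
import socket

host = "www.example.com"   # hypothetical hostname
try:
    address = socket.gethostbyname(host)
    print(f"{host} resolves to {address}")
    for port in (80, 443):
        try:
            with socket.create_connection((address, port), timeout=10):
                print(f"TCP connect to {address}:{port} succeeded")
        except OSError as exc:
            print(f"TCP connect to {address}:{port} failed: {exc}")
except socket.gaierror as exc:
    print(f"DNS lookup for {host} failed: {exc}")
```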
Without the actual URLs it's impossible to guess what is happening in your specific case.
Hope this helps,
Dirk
-
Related Questions
-
Why are my website's backlinks not getting crawled by Moz?
Hi, I have a query about my website's backlinks. Some high-authority sites link to my website, but these links are still not showing in Moz. My website is six months old and continuously gets backlinks from high-authority sites, yet they do not show in Moz and are not improving my site's DA and PA. I've attached a screenshot of the Moz Link Explorer results; please check it and guide me on what to do so that Moz will consider the site and assign it some authority. Also, how long does Moz take to crawl a website's backlinks and show them in Link Explorer? Website URL: https://www.welderexpert.com. Suggestions from Moz experts are needed. Thanks
Link Explorer | KOidue0
-
Crawl a Node.js page - Why can I only see my front page?
Hi, when I try to crawl my website (https://www.doorot.com/) it can only find my front page. It's a Node.js page. Has anyone had the same problem, or does anyone know how to crawl my site so that all my pages are found? Kasper
Link Explorer | KasperClio1
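A common cause for this is that a Node.js single-page app only renders its navigation with client-side JavaScript, so a crawler reading the raw HTML finds no internal links to follow. A quick way to check (a sketch using the standard-library HTML parser; only the domain comes from the question):

```python
# Minimal sketch: fetch the raw HTML (no JavaScript execution) and count the
# links a non-rendering crawler would see.
from html.parser import HTMLParser
import requests

class LinkCounter(HTMLParser):
    def __init__(self):
        super().__init__()
        self.hrefs = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.hrefs.append(href)

html = requests.get("https://www.doorot.com/", timeout=15).text
parser = LinkCounter()
parser.feed(html)
print(f"{len(parser.hrefs)} links visible in the raw HTML")
# Few or no links here suggests the navigation is rendered client-side and
# needs server-side rendering or prerendering before crawlers can follow it.
```
-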
Crawl Errors on a WordPress Website
I am getting a 902 error, "Network Errors Prevented Crawler from Contacting Server", when requesting a site crawl on my WordPress website, https://www.systemoneservices.com. I think the error may be related to site speed and caching, but I would appreciate a second opinion and potential solutions. Thanks, Rich
Link Explorer | rweede0
-
Sufficient Words in Content error, despite having more than 300 words
My client has just moved to a new website, and I receive the "Sufficient Words in Content" error on all website pages, although those pages contain far more than 300 words. For example: https://www.assuta.co.il/category/assuta_sperm_bank/ https://www.assuta.co.il/category/international_bank_sperm_donor/ I also see warnings for "Exact Keyword Used in Document at Least Once", although the keywords are used in the pages. The question is: why can't the Moz crawler see the pages' contents?
Link Explorer | michalos12210
-
804 error preventing website from being crawled
Hi, for both subdomains https://us.sagepub.com and https://uk.sagepub.com, crawling is being prevented by an 804 error. I can't see any reason why this should be the case, as all content is served over HTTPS. Thanks
Link Explorer | philmoorse0
-
Is there some way to tell the Moz crawler not to crawl URLs with particular dynamic tags such as "?redirect-to:http//"?
We are encountering an issue where the crawler is finding a ton of pages from our WordPress login URL that carry this dynamic tag pointing to all kinds of different blog entries. It's madness. I can't figure out what is causing these URLs to be generated and crawled in the first place! Does this sound familiar to anyone out there? Any constructive suggestions? Would robots.txt or maybe meta robots tags resolve this crawl issue?
Link Explorer | RegistrarCorp0
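For crawlers that honour wildcard rules in robots.txt, a pattern-based Disallow can keep them away from these parameterised login URLs. The sketch below is only an illustration: it assumes the offending parameter is WordPress's usual redirect_to, and wildcard support varies by crawler, so check Moz's and Google's robots.txt documentation before relying on such a rule.

```python
# Minimal sketch: show which crawled URL paths a wildcard Disallow rule would
# exclude. The rule and sample paths are hypothetical.
import re

disallow_rule = "/*?*redirect_to="          # hypothetical robots.txt rule
pattern = re.compile(re.escape(disallow_rule).replace(r"\*", ".*"))

sample_paths = [                            # illustrative URL paths
    "/wp-login.php?redirect_to=https%3A%2F%2Fexample.com%2Fblog%2Fpost-1%2F",
    "/blog/post-1/",
]
for path in sample_paths:
    verdict = "blocked" if pattern.match(path) else "allowed"
    print(f"{verdict}: {path}")
```
-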
Moz can't crawl domain due to IP Geo redirect loop
Hi, I'm trying to crawl our domain www.salvationarmy.org.au via my Moz account and it only ever returns results for one page, when it should be crawling more than 3,000 pages. In talking to support, they said that the redirect we have in place is creating a 302 loop and therefore not delivering results. Usually in this case I would obtain Moz's IP addresses and add them to the redirect settings as an exception, but Moz have said they use cloud-based services for crawling, so the IPs change all the time. Does anyone have any idea how to solve this issue? At this point I've paid for a year's subscription to a product I can't use. Thanks, Mel
Link Explorer | SalvationArmy0
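One way to see what a cookie-less crawler runs into is to follow the redirect chain yourself and print each hop. A minimal sketch (the test user agent string is made up):

```python
# Minimal sketch: follow the redirect chain the way a cookie-less client would
# and print each hop, to see where the 302 loop occurs.
import requests

session = requests.Session()
session.headers["User-Agent"] = "rogerbot-test/1.0"   # hypothetical UA for testing
try:
    response = session.get("https://www.salvationarmy.org.au/", timeout=15, allow_redirects=True)
    for hop in response.history:
        print(hop.status_code, hop.url, "->", hop.headers.get("Location"))
    print("Final:", response.status_code, response.url)
except requests.exceptions.TooManyRedirects:
    print("Redirect loop detected: more than 30 hops without reaching a final page")
```

If the loop only appears for clients without the geo cookie a browser would carry, keying the redirect exception off the crawler's user agent rather than its IP address is a common workaround.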