Error 406 with crawler test
-
Hi all. I have a big problem with the Moz crawler on this website: www.edilflagiello.it.
In July, with the old version, I had no problems and the crawler gave me a CSV report with all the URLs, but since we switched to the new Magento theme and restyled the old version, each time I use the crawler I receive a CSV file with this error:
"error 406"
Can you help me understand what the problem is? I have already disabled .htaccess and robots.txt, but nothing changed.
The website is working well, and I have also used Screaming Frog without problems.
-
Thank you very much, Dirk. This Sunday I will try to fix all the errors and then I will run the crawl again. Thanks for your assistance.
-
I noticed that you have a Vary: User-Agent header, so I tried visiting your site with JS disabled and the user agent switched to Rogerbot. Result: the site did not load (it kept spinning endlessly), and the console showed quite a number of elements that generated 404s. In the end, there was a timeout.
Try Screaming Frog - set the user agent to Custom and change the values to
Name:Rogerbot
Agent: Mozilla/5.0 (compatible; rogerBot/1.0; UrlCrawler; http://www.seomoz.org/dp/rogerbot)
It will be unable to crawl your site. Check your server configuration - there are issues in how you handle the Mozbot user agent.
Check the attached images.
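As a cross-check outside Screaming Frog, the same test can be sketched in a few lines of standard-library Python (a minimal sketch, not Moz's tooling; the user-agent string is the one quoted above and the URL is just the site from this thread):

```python
import urllib.error
import urllib.request

# The user-agent string Moz's crawler sends (quoted from the post above).
ROGERBOT_UA = ("Mozilla/5.0 (compatible; rogerBot/1.0; UrlCrawler; "
               "http://www.seomoz.org/dp/rogerbot)")

def fetch_status(url, user_agent=ROGERBOT_UA, timeout=15):
    """Request `url` with a custom User-Agent and return the HTTP status code."""
    req = urllib.request.Request(url, headers={"User-Agent": user_agent})
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status
    except urllib.error.HTTPError as err:
        return err.code  # e.g. 406 if the server rejects this user agent

# Example (requires network access):
# fetch_status("http://www.edilflagiello.it/")  # a healthy config returns 200
```

If this returns 406 while the same call with a browser user agent returns 200, the server (or a security module/WAF in front of it) is filtering on the user-agent string.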
Dirk
-
Nothing. After fixing all the 40x errors, the crawl report is still empty. Any other ideas?
-
Thanks, I'll wait another day.
-
I know the Crawl Test reports are cached for about 48 hours, so there is a chance that the CSV will look identical to the previous one for that reason.
With that in mind, I'd recommend waiting another day or two before requesting a new Crawl Test, or just waiting until your next weekly campaign update, if that is sooner.
-
I have fixed all the errors, but the CSV is still empty and says:
http://www.edilflagiello.it,2015-10-21T13:52:42Z,406 : Received 406 (Not Acceptable) error response for page.,Error attempting to request page
Here is the screenshot: http://www.webpagetest.org/result/151020_QW_JMP/1/details/
Any ideas? Thanks for your help.
-
Thanks a lot! I'm going to check these errors before the next crawl.
-
Great answer Dirk! Thanks for helping out!
Something else I noticed: when I ran the site through a third-party tool, the W3C Markup Validation Service, it came back with quite a few errors, and the service was also validating the page as XHTML 1.0 Strict, which looks to be common in other cases of 406 I've seen.
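For background on the status itself: a 406 means the server ran content negotiation against the request's Accept header and decided it had nothing it was willing to serve. A toy sketch of that decision (illustrative only; real servers also weigh q-values and other Accept-* headers):

```python
def negotiate(accept_header, available_types):
    """Return the first available MIME type the client accepts, or None.

    A None result is what a strictly negotiating server turns into a
    406 Not Acceptable response.
    """
    # Drop q-value parameters, keep just the media-type patterns.
    accepted = [part.split(";")[0].strip() for part in accept_header.split(",")]
    for mime in available_types:
        for pattern in accepted:
            if pattern in ("*/*", mime) or (
                pattern.endswith("/*") and mime.startswith(pattern[:-2] + "/")
            ):
                return mime
    return None

# A client that only sends "Accept: text/html" against a server that insists
# on serving application/xhtml+xml gets nothing back -> 406:
print(negotiate("text/html", ["application/xhtml+xml"]))  # None
print(negotiate("text/html,application/xhtml+xml;q=0.9", ["application/xhtml+xml"]))
```

That would fit the XHTML 1.0 Strict observation: a server configured to serve only `application/xhtml+xml` strictly could 406 a crawler with a narrow Accept header, though a user-agent filter as described earlier in the thread is equally plausible here.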
-
If you check your page with external tools, you'll see that the general status of the page is 200; however, there are different elements that generate a 4xx error (your logo generates a 408 error, and the same goes for the shopping cart). For more details, you could check this: http://www.webpagetest.org/result/151019_29_14E6/1/details/.
Remember that the Moz bot is quite sensitive to errors: while browsers, Googlebot, and Screaming Frog will accept errors on a page, the Moz bot stops in case of doubt.
You might want to check the 4xx errors and correct them; normally the Moz bot should be able to crawl your site once these errors are fixed. More info on 406 errors can be found here. If you have access to your log files, you could check in detail which elements are causing problems when Mozbot visits your site.
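If you do dig into the logs, a rough way to isolate the failing requests is to filter for the rogerbot user agent with a 4xx/5xx status (a sketch assuming the common Apache/Nginx "combined" log format; adjust the regex to your server's format):

```python
import re

# Status code and user agent from an Apache/Nginx "combined" log line.
LINE_RE = re.compile(r'"\s(?P<status>\d{3})\s\S+\s"[^"]*"\s"(?P<agent>[^"]*)"\s*$')

def rogerbot_errors(log_lines):
    """Return (status, line) pairs for rogerbot requests that got a 4xx/5xx."""
    hits = []
    for line in log_lines:
        m = LINE_RE.search(line)
        if not m:
            continue
        if ("rogerbot" in m.group("agent").lower()
                and m.group("status").startswith(("4", "5"))):
            hits.append((int(m.group("status")), line))
    return hits

line = ('1.2.3.4 - - [21/Oct/2015:13:52:42 +0000] "GET / HTTP/1.1" 406 512 '
        '"-" "Mozilla/5.0 (compatible; rogerBot/1.0; UrlCrawler; '
        'http://www.seomoz.org/dp/rogerbot)"')
print(rogerbot_errors([line]))  # one (406, line) pair
```

Running it over the full access log (e.g. `rogerbot_errors(open("/var/log/apache2/access.log"))`, path is an example) shows exactly which URLs fail for the Moz user agent.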
Dirk