Error 406 with crawler test
-
Hi all. I have a big problem with the Moz crawler on this website: www.edilflagiello.it.
In July, with the old version of the site, I had no problems and the crawler gave me a CSV report with all the URLs. But since we switched to a new Magento theme and restyled the old version, every time I use the crawler I receive a CSV file with this error:
"error 406"
Can you help me understand what the problem is? I have already disabled .htaccess and robots.txt, but nothing changed.
The website is working fine, and I have also used Screaming Frog without issues.
-
Thank you very much, Dirk. This Sunday I will try to fix all the errors and then run the crawl again. Thanks for your assistance.
-
I noticed that you have a Vary: User-Agent header on the site, so I tried visiting it with JS disabled and switched my user agent to Rogerbot. Result: the site did not load (it kept spinning endlessly), and the console showed quite a number of elements generating 404s. In the end there was a timeout.
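If you want to verify that header yourself, here's a minimal sketch using Python's requests library (just a plain request for the homepage's headers; any HTTP client would do):

```python
import requests

# Fetch the homepage headers and inspect the Vary header.
r = requests.head("http://www.edilflagiello.it", allow_redirects=True, timeout=10)
print(r.status_code)
print(r.headers.get("Vary"))  # "User-Agent" means responses differ per user agent
```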
Try Screaming Frog: set the user agent to Custom and change the values to
Name: Rogerbot
Agent: Mozilla/5.0 (compatible; rogerBot/1.0; UrlCrawler; http://www.seomoz.org/dp/rogerbot)
It will be unable to crawl your site. Check your server configuration; there are issues in how it handles the Moz bot user agent.
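You can run the same check outside Screaming Frog too; here's a minimal sketch in Python (the User-Agent string is the one quoted above, the rest is just an ordinary HTTP request):

```python
import requests

ROGERBOT_UA = ("Mozilla/5.0 (compatible; rogerBot/1.0; UrlCrawler; "
               "http://www.seomoz.org/dp/rogerbot)")

# Request the homepage while identifying as Rogerbot.
r = requests.get("http://www.edilflagiello.it",
                 headers={"User-Agent": ROGERBOT_UA},
                 timeout=30)
print(r.status_code)  # a 406 here confirms the server rejects this user agent
```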
Check the attached images.
Dirk
-
No luck. After fixing all the 40x errors, the crawl report is still empty. Any other ideas?
-
Thanks, I'll wait another day.
-
I know the Crawl Test reports are cached for about 48 hours, so there is a chance that the CSV will look identical to the previous one for that reason.
With that in mind, I'd recommend waiting another day or two before requesting a new Crawl Test, or just waiting until your next weekly campaign update if that is sooner.
-
I have fixed all the errors, but the CSV is still empty and says:
http://www.edilflagiello.it,2015-10-21T13:52:42Z,406 : Received 406 (Not Acceptable) error response for page.,Error attempting to request page
Here is the screenshot: http://www.webpagetest.org/result/151020_QW_JMP/1/details/
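For anyone who wants to reproduce this, here is a rough side-by-side check with Python's requests (the browser UA string is only an example):

```python
import requests

URL = "http://www.edilflagiello.it"
AGENTS = {
    "browser": "Mozilla/5.0 (Windows NT 10.0; rv:40.0) Gecko/20100101 Firefox/40.0",
    "rogerbot": ("Mozilla/5.0 (compatible; rogerBot/1.0; UrlCrawler; "
                 "http://www.seomoz.org/dp/rogerbot)"),
}

# If "browser" returns 200 while "rogerbot" returns 406, the server (or a
# filtering module in front of it) is rejecting requests by User-Agent.
for name, ua in AGENTS.items():
    r = requests.get(URL, headers={"User-Agent": ua}, timeout=30)
    print(f"{name}: {r.status_code}")
```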
Any ideas? Thanks for your help.
-
Thanks a lot, guys! I'm going to check these errors before the next crawl.
-
Great answer, Dirk! Thanks for helping out!
Something else I noticed: the site comes back with quite a few errors when I run it through a third-party tool, the W3C Markup Validation Service, which also validates the page as XHTML 1.0 Strict. That doctype looks to be common in other cases of 406 errors I've seen.
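Since a 406 means the server couldn't produce a representation matching the request's Accept headers, one way to probe whether content negotiation around that doctype is involved is a quick sketch like this (the Accept values are illustrative; I don't know what Rogerbot actually sends):

```python
import requests

URL = "http://www.edilflagiello.it"

# A 406 (Not Acceptable) means the server claims none of its representations
# match the client's Accept header, so try a few variants and compare.
for accept in ("text/html,application/xhtml+xml",  # typical browser value
               "text/html",
               "*/*"):
    r = requests.get(URL, headers={"Accept": accept}, timeout=30)
    print(f"Accept: {accept!r} -> {r.status_code}")
```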
-
If you check your page with external tools, you'll see that the general status of the page is 200; however, several elements on it generate 4xx errors (your logo returns a 408 error, and the same goes for the shopping cart). For more details, check http://www.webpagetest.org/result/151019_29_14E6/1/details/.
Remember that the Moz bot is quite sensitive to errors: while browsers, Googlebot & Screaming Frog will tolerate errors on a page, the Moz bot stops when in doubt.
You might want to check the 4xx errors & correct them; normally the Moz bot should be able to crawl your site once these errors are fixed. More info on 406 errors can be found here. If you have access to your log files, you could check in detail which elements are causing the problems when the Moz bot visits your site.
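If you'd rather script that check than rerun webpagetest, here's a rough sketch with Python's requests and BeautifulSoup (an approximation for finding failing assets, not a reproduction of what the Moz bot does):

```python
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

BASE = "http://www.edilflagiello.it"

# Download the homepage and collect the URLs of embedded assets.
page = requests.get(BASE, timeout=30)
soup = BeautifulSoup(page.text, "html.parser")
assets = {urljoin(BASE, tag.get("src") or tag.get("href"))
          for tag in soup.find_all(["img", "script", "link"])
          if tag.get("src") or tag.get("href")}

# Report every asset that does not come back with a 2xx status,
# like the logo and shopping cart seen in the webpagetest results.
for url in sorted(assets):
    try:
        r = requests.head(url, allow_redirects=True, timeout=15)
        if not r.ok:
            print(r.status_code, url)
    except requests.RequestException as exc:
        print("ERR", url, exc)
```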
Dirk