Error 406 with crawler test
-
Hi all. I have a big problem with the Moz crawler on this website: www.edilflagiello.it.
In July, with the old version, I had no problems and the crawler gave me a CSV report with all the URLs. But after we switched to the new Magento theme and restyled the old version, each time I use the crawler I receive a CSV file with this error:
"error 406"
Can you help me understand what the problem is? I have already disabled .htaccess and robots.txt, but nothing changed.
The website is working well, and I have also crawled it with Screaming Frog.
-
Thank you very much, Dirk. This Sunday I will try to fix all the errors and then I will try again. Thanks for your assistance.
-
I noticed that you have a Vary: User-Agent header, so I tried visiting your site with JS disabled and the user agent switched to Rogerbot. Result: the site did not load (it spun endlessly), and the console showed quite a number of elements that generated 404s. In the end there was a timeout.
Try Screaming Frog: set the user agent to Custom and change the values to
Name: Rogerbot
Agent: Mozilla/5.0 (compatible; rogerBot/1.0; UrlCrawler; http://www.seomoz.org/dp/rogerbot)
It will be unable to crawl your site. Check your server configuration - there are issues in how it handles the Mozbot user agent.
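If you'd rather reproduce the check outside Screaming Frog, here is a minimal sketch (Python with the requests library; the browser user-agent string is just an example) that compares the response the server gives a normal browser with the one it gives Rogerbot:

```python
import requests

URL = "http://www.edilflagiello.it/"

# Example user-agent strings: a generic desktop browser vs. Rogerbot.
USER_AGENTS = {
    "browser": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "rogerbot": "Mozilla/5.0 (compatible; rogerBot/1.0; UrlCrawler; "
                "http://www.seomoz.org/dp/rogerbot)",
}

for name, ua in USER_AGENTS.items():
    try:
        # Follow redirects as a crawler would; the timeout avoids hanging forever.
        resp = requests.get(URL, headers={"User-Agent": ua}, timeout=30)
        print(f"{name:8s} -> HTTP {resp.status_code} ({len(resp.content)} bytes)")
    except requests.RequestException as exc:
        print(f"{name:8s} -> request failed: {exc}")
```

If the browser request comes back 200 while the Rogerbot request returns 406 or hangs until the timeout, the problem lies in how the server reacts to that user agent rather than in the pages themselves.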
Check the attached images.
Dirk
-
Nothing. After I fixed all the 40x errors, the crawler result is still empty. Any other ideas?
-
Thanks, I'll wait another day.
-
I know the Crawl Test reports are cached for about 48 hours, so there is a chance that the CSV will look identical to the previous one for that reason.
With that in mind, I'd recommend waiting another day or two before requesting a new Crawl Test, or just waiting until your next weekly campaign update if that is sooner.
-
I have fixed all the errors, but the CSV is still empty and says:
http://www.edilflagiello.it,2015-10-21T13:52:42Z,406 : Received 406 (Not Acceptable) error response for page.,Error attempting to request page
Here is the screenshot: http://www.webpagetest.org/result/151020_QW_JMP/1/details/
Any ideas? Thanks for your help.
-
Thanks a lot, guys! I'm going to check these errors before the next crawl.
-
Great answer Dirk! Thanks for helping out!
Something else I noticed is that the site comes back with quite a few errors when I run it through a third-party tool, the W3C Markup Validation Service, and the page was being validated as XHTML 1.0 Strict, which looks to be common in other cases of 406 I've seen.
-
If you check your page with external tools you'll see that the general status of the page is 200; however, there are several elements which generate a 4xx error (your logo generates a 408 error, and the same goes for the shopping cart). For more details you could check this: http://www.webpagetest.org/result/151019_29_14E6/1/details/.
Remember that the Moz bot is quite sensitive to errors: while browsers, Googlebot & Screaming Frog will accept errors on a page, the Moz bot stops in case of doubt.
You might want to check the 4xx errors and correct them; normally the Moz bot should be able to crawl your site once those errors are corrected. More info on 406 errors can be found here. If you have access to your log files, you could check in detail which elements are causing problems when Mozbot visits your site.
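If you want to hunt down those failing elements yourself, a rough sketch along these lines could help (Python with requests and BeautifulSoup; the Rogerbot user-agent string is the one quoted elsewhere in this thread, and the choice of tags to check is only an assumption about which resources matter):

```python
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

PAGE = "http://www.edilflagiello.it/"
ROGERBOT_UA = ("Mozilla/5.0 (compatible; rogerBot/1.0; UrlCrawler; "
               "http://www.seomoz.org/dp/rogerbot)")
headers = {"User-Agent": ROGERBOT_UA}

# Fetch the page itself first and report its status.
page = requests.get(PAGE, headers=headers, timeout=30)
print(f"{PAGE} -> HTTP {page.status_code}")

# Collect the URLs of embedded resources (images, scripts, stylesheets).
soup = BeautifulSoup(page.text, "html.parser")
resources = set()
for tag, attr in (("img", "src"), ("script", "src"), ("link", "href")):
    for el in soup.find_all(tag):
        if el.get(attr):
            resources.add(urljoin(PAGE, el[attr]))

# Request every resource with the same user agent and flag anything that isn't 200.
for url in sorted(resources):
    try:
        r = requests.get(url, headers=headers, timeout=30)
        if r.status_code != 200:
            print(f"{r.status_code}  {url}")
    except requests.RequestException as exc:
        print(f"FAIL {url} ({exc})")
```

Cross-referencing that output with the access-log lines that contain "rogerbot" should show exactly which elements the Moz crawler stumbles over.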
Dirk