Error 406 with crawler test
-
Hi all. I have a big problem with the Moz crawler on this website: www.edilflagiello.it.
In July, with the old version of the site, I had no problems: the crawler gave me a CSV report with all the URLs. But since we switched to a new Magento theme and restyled the old version, each time I use the crawler I receive a CSV file with this error:
"error 406"
Can you help me understand what the problem is? I have already tried disabling .htaccess and robots.txt, but nothing changed.
The website itself is working well, and I have also used Screaming Frog on it.
-
Thank you very much, Dirk. This Sunday I will try to fix all the errors, and then I will run the crawl again. Thanks for your assistance.
-
I noticed that you have a Vary: User-Agent header, so I tried visiting your site with JS disabled and the user agent switched to Rogerbot. Result: the site did not load (it just spun endlessly), and checking the console showed quite a number of elements generating 404s. In the end there was a timeout.
Try Screaming Frog: set the user agent to Custom and change the values to
Name: Rogerbot
Agent: Mozilla/5.0 (compatible; rogerBot/1.0; UrlCrawler; http://www.seomoz.org/dp/rogerbot)
It will be unable to crawl your site. Check your server configuration; there are issues in how it handles the Rogerbot user agent.
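You can also reproduce this from the command line, assuming you have curl installed (the status codes shown are whatever your server actually returns, so the comment below describes the expected symptom, not a guaranteed result):

```shell
# Fetch the homepage twice: once with curl's default user agent,
# once spoofing Rogerbot. A 200 for the first request and a 406 for
# the second confirms the server is rejecting the crawler's user agent.
UA='Mozilla/5.0 (compatible; rogerBot/1.0; UrlCrawler; http://www.seomoz.org/dp/rogerbot)'
curl -s -o /dev/null -w 'default UA:  %{http_code}\n' 'http://www.edilflagiello.it/'
curl -s -o /dev/null -w 'rogerbot UA: %{http_code}\n' -A "$UA" 'http://www.edilflagiello.it/'
```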
Check the attached images.
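For what it's worth: a plain Apache deny normally returns a 403, so a 406 specifically often points at a security module filtering by user agent. I can't see your configuration, but a ModSecurity rule along these lines (purely illustrative, not your actual config) would produce exactly this symptom — 200 for browsers, 406 for the bot:

```apache
# Hypothetical ModSecurity rule: any request whose User-Agent
# matches "rogerbot" (case-insensitive) is denied with status 406.
SecRule REQUEST_HEADERS:User-Agent "(?i)rogerbot" \
    "id:100001,phase:1,deny,status:406"
```

If you find something like this in your server config or a security plugin, whitelisting the Rogerbot user agent should fix the crawl.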
Dirk
-
Nothing. After fixing all the 40x errors, the crawler report is still empty. Any other ideas?
-
Thanks, I'll wait another day.
-
I know the Crawl Test reports are cached for about 48 hours so there is a chance that the CSV will look identical to the previous one for that reason.
With that in mind, I'd recommend waiting another day or two before requesting a new Crawl Test, or just waiting until your next weekly campaign update if that is sooner.
-
I have fixed all the errors, but the CSV is still empty and says:
http://www.edilflagiello.it,2015-10-21T13:52:42Z,406 : Received 406 (Not Acceptable) error response for page.,Error attempting to request page
Here is the screenshot: http://www.webpagetest.org/result/151020_QW_JMP/1/details/
Any ideas? Thanks for your help.
-
Thanks a lot! I'm going to check these errors before the next crawl.
-
Great answer Dirk! Thanks for helping out!
Something else I noticed: the site comes back with quite a few errors when I run it through a third-party tool, the W3C Markup Validation Service, and the page is also being validated as XHTML 1.0 Strict, which looks to be common in other cases of 406 I've seen.
-
If you check your page with external tools you'll see that the general status of the page is 200; however, there are various elements that generate 4xx errors (your logo generates a 408 error, and the same goes for the shopping cart). For more details you can check http://www.webpagetest.org/result/151019_29_14E6/1/details/.
Remember that the Moz bot is quite sensitive to errors: while browsers, Googlebot, and Screaming Frog will tolerate errors on a page, the Moz bot stops in case of doubt.
You might want to check the 4xx errors and correct them; normally the Moz bot should be able to crawl your site once these errors are fixed. More info on 406 errors can be found here. If you have access to your log files, you can check in detail which elements are causing the problems when Rogerbot visits your site.
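As a sketch of what to look for in the logs (the log line below is made up for illustration, and your access log will live wherever your server writes it, e.g. under /var/log — adjust the path accordingly):

```shell
# Write one hypothetical combined-log line such as Rogerbot would leave,
# then print the status code and requested path for every Rogerbot hit.
cat > /tmp/sample_access.log <<'EOF'
216.244.66.1 - - [21/Oct/2015:13:52:42 +0000] "GET / HTTP/1.1" 406 226 "-" "Mozilla/5.0 (compatible; rogerBot/1.0; UrlCrawler; http://www.seomoz.org/dp/rogerbot)"
EOF
# $9 is the status code, $7 the request path in combined log format.
grep -i 'rogerbot' /tmp/sample_access.log | awk '{print $9, $7}'
```

Run the same grep/awk against your real access log: every line showing a 406 tells you exactly which URL the crawler was refused on.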
Dirk