406 errors
-
Just started seeing 406 errors on our last crawl (all JPG-related). SEOmoz found 670 of these on my site when there were 0 before.
I have checked the MIME types and everything seems to be in order. So could it be that the SEOmoz crawler is reporting errors that aren't really errors?
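For anyone who wants to rule out a server-side cause before writing this off as a crawler quirk: a 406 Not Acceptable means the server decided it couldn't satisfy the request's Accept headers. A minimal .htaccess sketch that addresses the two usual suspects (assuming Apache with mod_mime and mod_negotiation; adjust to your own setup):
# Hedged sketch, not a guaranteed fix. Declare the JPEG MIME type
# explicitly so .jpg is always served as image/jpeg, and turn off
# MultiViews, whose content negotiation can answer a strict Accept
# header with 406 Not Acceptable.
AddType image/jpeg .jpg .jpeg
Options -MultiViews
If the crawl errors persist after that, the crawler itself is the more likely culprit, as the replies below suggest.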
-
Yeah, we got 5,000 or so. Glad to see this thread, because I had a mini heart attack there. The only other time I'd seen a 406 come up was related to phishing/ad/virus filtering, so I thought maybe we had an ad-spewing virus or something.
-
Same here, don't worry about it... We got around 120 of these errors last week, and they are now gone. It's kind of frustrating when you're trying to get those errors and warnings down to 0!
-
I figured as much, Keri; thank you for the reply and confirmation.
Thanks,
Marty
-
I believe the cause is on our end. We made Roger more picky about this type of thing, but we're telling him to loosen up today, so he should stop bugging you about it as much on the next crawl.
Keri
-
We just started noticing the 406 errors on this last crawl as well; ours occur on our PDF docs and RSS feeds, but as in your case, everything looks in order.
Related Questions
-
.htaccess problem causing 605 Error?
I'm working on a site; it's just a few HTML pages, and I've added a WP blog. I've just noticed that Moz is giving me the following error with reference to http://website.com (Webmaster Tools is set to show the www subdomain, so it appears OK there): Error Code 605: Page Banned by robots.txt, X-Robots-Tag HTTP Header, or Meta Robots Tag. Here's the code from my .htaccess; is this causing the problem?
RewriteEngine on
Options +FollowSymLinks
RewriteCond %{THE_REQUEST} ^.*/index.html
RewriteRule ^(.*)index.html$ http://www.website.com/$1 [R=301,L]
RewriteCond %{THE_REQUEST} ^.*/index.php
RewriteRule ^(.*)index.php$ http://www.website.com/$1 [R=301,L]
RewriteCond %{HTTP_HOST} ^website.com$ [NC]
RewriteRule ^(.*)$ http://www.website.com/$1 [R=301,L]
# BEGIN WordPress
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteRule ^index.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
</IfModule>
# END WordPress
Thanks for any advice you can offer!
Technical SEO | Stevie-G
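One hedged note on this: nothing in the rewrite rules above sets a robots directive, so they shouldn't be able to trigger error 605 by themselves. That error comes from robots.txt, a meta robots tag, or an X-Robots-Tag HTTP header. Purely as an illustration (Apache with mod_headers assumed), the kind of directive worth searching your server config for looks like this:
# Illustration only: a header like this anywhere in the Apache config
# or .htaccess would legitimately cause "Page Banned by ... X-Robots-Tag",
# so it is worth grepping for before blaming the rewrite rules.
Header set X-Robots-Tag "noindex, nofollow"
If nothing like that exists, check robots.txt and the WP blog's "discourage search engines" privacy setting, which emits a meta robots noindex tag.
-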
Www vs non-www - Crawl Error 902
I have just taken over admin of my company website, and I have been confronted with crawl error 902 on the existing campaign that has been running for years in Moz. It seems to be an intermittent problem. I have searched and tried many of the suggested solutions, and none of them seem to help.
The campaign is currently set up with the URL http://companywebsite.co.uk; when I tried to do a Moz manual crawl using this URL, I got an error message. I changed the link to crawl to http://www.companywebsite.co.uk and the crawl went off without a hitch, and I'm currently waiting on the results.
From testing I now know that if I go to the non-www version of my company's website, nothing happens; it never loads. But if I go to the www version, it loads right away. I know for SEO you only want one of these URLs so you don't have duplicate content, but I thought the non-www version should redirect to the www version, not just be completely missing.
I tried to set up a new campaign with the default URL being the www version, but Moz automatically changed it to the non-www version. It seems I cannot set up a new campaign that automatically crawls the www version.
Does it sound like I'm on the right path to finding the cause? Or can somebody else offer up a solution? Many thanks,
Ben.
Technical SEO | ATP
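A hedged pointer for anyone in the same spot: if the bare domain never responds at all, the first thing to check is DNS, specifically whether companywebsite.co.uk has an A record pointing at the web server, because no .htaccess rule can run for a host that never reaches Apache. Once both hosts resolve, a minimal non-www to www redirect sketch (Apache mod_rewrite; the hostname is taken from the question) looks like:
# Hedged sketch: 301 every request whose Host header is the bare
# domain over to the www host, preserving the requested path.
RewriteEngine On
RewriteCond %{HTTP_HOST} ^companywebsite\.co\.uk$ [NC]
RewriteRule ^(.*)$ http://www.companywebsite.co.uk/$1 [R=301,L]
-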
Salvaging links from WMT “Crawl Errors” list?
When someone links to your website, but makes a typo while doing it, those broken inbound links will show up in Google Webmaster Tools in the Crawl Errors section as “Not Found”. Often they are easy to salvage by just adding a 301 redirect in the htaccess file. But sometimes the typo is really weird, or the link source looks a little scary, and that's what I need your help with.
First, let's look at the weird typo problem. If it is something easy, like they just lost the last part of the URL (such as www.mydomain.com/pagenam), then I fix it in htaccess this way:
RewriteCond %{HTTP_HOST} ^mydomain.com$ [OR]
RewriteCond %{HTTP_HOST} ^www.mydomain.com$
RewriteRule ^pagenam$ "http://www.mydomain.com/pagename.html" [R=301,L]
But what about when the last part of the URL is really screwed up? Especially with non-text characters, like these:
www.mydomain.com/pagename1.htmlsale
www.mydomain.com/pagename2.htmlhttp://
www.mydomain.com/pagename3.html"
www.mydomain.com/pagename4.html/
How is the htaccess RewriteRule typed up to send these oddballs to the individual pages they were supposed to go to, without the typo?
Second, is there a quick and easy method or tool to tell us if a linking domain is good or spammy? I have incoming broken links from sites like these:
www.webutation.net
titlesaurus.com
www.webstatsdomain.com
www.ericksontribune.com
www.addondashboard.com
search.wiki.gov.cn
www.mixeet.com
dinasdesignsgraphics.com
Your help is greatly appreciated. Thanks!
Greg
Technical SEO | GregB123
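On the weird-typo half of the question, one hedged approach (Apache mod_alias; the page names are the question's own placeholders) is a RedirectMatch per page that tolerates any trailing junk after the real filename:
# ".+" requires at least one extra character after .html, so the clean
# URL itself never matches and there is no redirect loop. The pattern
# catches trailing quotes, slashes, pasted URLs, and other junk alike.
RedirectMatch 301 ^/pagename1\.html.+$ http://www.mydomain.com/pagename1.html
RedirectMatch 301 ^/pagename2\.html.+$ http://www.mydomain.com/pagename2.html
Because RedirectMatch tests the decoded URL path, encoded characters such as %22 (the quote in the third example) are caught without special handling.
-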
Increase 404 errors or 301 redirects?
Hi all, I'm working on an e-commerce site that sells products that may only be available for a certain period of time. E.g., a product may only be sold for one year and then be permanently out of stock. When a product goes out of stock, the page is removed from the site regardless of any links it may have accumulated over time.
I am trying to figure out the best way to handle these permanently out-of-stock pages. At the moment, the site is set up to return a 404 page for each of these products, and there are currently 600 (and increasing) instances of this appearing in Google Webmaster Tools. I have read that too many 404 errors may have a negative impact on your site, and so thought I might 301 redirect these URLs to a more appropriate page. However, I've also read that too many 301 redirects may have a negative impact on your site. I foresee this being an issue several years down the road, when the site has thousands of expired products resulting in thousands of 404 errors or 301 redirects, depending on which route I take.
Which would be the better route? Is there a better solution?
Technical SEO | Oxfordcomma
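A hedged third option worth weighing alongside 404s and 301s: a 410 Gone status, which tells crawlers the removal is deliberate and permanent, so they stop retrying the URL. A minimal Apache mod_alias sketch (the product path is hypothetical):
# "Redirect gone" answers 410 for a permanently retired product page,
# signaling a deliberate removal rather than a broken link.
Redirect gone /products/discontinued-widget.html
Neither a 404 nor a 410 is generally treated as a penalty in itself; 301s are usually best reserved for pages that have a genuinely relevant replacement.
-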
HTML Encoding Error
Okay, so this is driving me nuts, because I should know how to find and fix this but for the life of me cannot. One of the sites I work for has a long-standing crawl error in Google WMT for the URL /a%3E, which appears on nearly every page of the site. I know that a%3E is an improperly encoded >, but I can't seem to find where exactly in the code it's coming from. So I keep putting it off and coming back to it every week or two, only to rack my brain and give up after about an hour (since it's not a priority and it's not really hurting anything). The site in question is https://www.deckanddockboxes.com/ and some of the pages it can be found on are /small-trash-can.html, /Dock-Step-Storage-Bin.html, and /Standard-Dock-Box-Maxi.html (among others). I figured it was about time to ask another set of eyes to look at this for me. Any help would be greatly appreciated. Thanks!
Technical SEO | MikeRoberts
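A hedged observation: since /a%3E appears on nearly every page, the malformed markup is almost certainly in a shared template (header, footer, or navigation include) rather than in the individual pages, so that is the place to search for a stray > inside an anchor's href. In the meantime, a sketch of an interim patch (Apache mod_alias; the homepage target is an assumption):
# RedirectMatch tests the decoded path, so ^/a>$ matches requests for
# /a%3E; 301 them somewhere sane until the template bug is fixed.
RedirectMatch 301 ^/a>$ https://www.deckanddockboxes.com/
-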
How do you fix soft 404 errors?
I'm getting soft 404 errors for a gallery on my site (http://chrle.us/MBKs), yet when I visit the URL http://www.chrisboar.com/html/galleries/114/portfolio/weddings-1/0 the image/URL is there. I'm not sure how to fix this. Note: we do redirect the site to the Flash site and only use those URLs for SEO purposes; maybe that is what is causing it. Thanks
Technical SEO | callmeed
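For context, hedged: a soft 404 usually means a URL answers 200 OK while the content looks like an error page to Google, and funneling many URLs into one destination (as a sitewide redirect to a Flash front end can) is a classic trigger. If some gallery URLs genuinely have nothing behind them, one sketch of a fix (Apache 2.4 mod_rewrite; the path pattern mirrors the URL above but is an assumption) is to return a real 404 status:
# Gallery URLs with no backing file or directory get a genuine 404
# status instead of a 200 OK "soft" one. R=404 requires Apache 2.4.
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^html/galleries/ - [R=404,L]
-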
Webmaster Tools... URL Errors
Hi mozzers, Quick question: what's the best thing to do about URL errors in Webmaster Tools? They are all 404s that point from external sites, and many of them are junk spam sites. Should I mark them as "fixed" or just leave them? I'm hoping Google is aware that it's out of my control if spam sites want to link to 404s on my site. Peter
Technical SEO | PeterM22
-
Impact of "restricted by robots" crawler error in WT
I have been wondering about this for a while now with regards to several of my sites. I am getting a list of pages that I have blocked in the robots.txt file. If I restrict Google from crawling them, then how can they consider their existence an error? In one case, I have even removed the URLs from the index. Do you have any idea of the negative impact associated with these errors? And how do you suggest I remedy the situation? Thanks for the help.
Technical SEO | phogan
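"Restricted by robots" listings are generally informational rather than penalties: Google is reporting URLs it discovered but was told not to fetch. If the real goal is keeping the pages out of the index entirely, one hedged alternative (Apache mod_headers; the filename pattern is purely an assumption) is to unblock them in robots.txt and serve a noindex header instead, so the crawler can fetch the page, see the directive, and drop it:
# Serve a noindex header for these pages. This only works if the URLs
# are NOT blocked in robots.txt, since the crawler has to fetch a page
# to see its headers.
<FilesMatch "^private-.*\.html$">
    Header set X-Robots-Tag "noindex"
</FilesMatch>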