4XX (Client Error)
-
How much will five of these errors hurt the search engine ranking of the site itself (i.e., the domain) if five of its pages return this error?
-
Not sure if this is any help to anyone, but I have almost the same issue, except mine is a 500 error. The description says:
Traceback (most recent call last):
  File "build/bdist.linux-x86_64/egg/downpour/__init__.py", line 391, in _error
    failure.raiseException()
  File "/usr/local/lib/python2.7/site-packages/twisted/python/failure.py", line 370, in raiseException
    raise self.type, self.value, self.tb
Error: 500 Internal Server Error
Talking to my hosting provider, I was told that when the SEOmoz bot crawled my site, it pushed my CPU usage over 25%, causing the errors... We will see if this happens again on Sunday.
-
One of my crawls has just completed, and I see that I have five "404 : Error" messages displaying the same error as quoted above. I may be a little pedantic, as five seems petty compared to the numbers quoted by other members, but I would like to know whether there is anything I can do to eliminate them. Can you advise whether this comes from the Moz crawl itself or from an external cause that I can influence?
I greatly appreciate your time.
-
Thank you!
-
We do know about the 406 errors, and we uploaded a fix for that on Tuesday. Your next crawl should not show these errors again.
Keri
-
I am experiencing exactly the same thing as alsvik: I went from zero 406 errors to 681 in one week, having changed nothing on my site. Like him, I find that it is PDFs and JPGs that are generating the error, and I get exactly the same error message.
Clearly, SEOmoz has changed a Python script such that its bot no longer accepts these MIME types. Please correct this ASAP.
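To see why an unchanged site can suddenly throw 406s: a 406 Not Acceptable is the server's response when it decides it cannot satisfy the Accept header the crawler sent. Here is a rough sketch of that mechanic, assuming Python with the requests library; the URL is a placeholder, and not every server enforces content negotiation this strictly:

```python
import requests

# Placeholder URL standing in for one of the flagged PDFs.
pdf_url = "https://www.example.com/sample.pdf"

# A permissive Accept header: the server can satisfy it, so expect 200.
ok = requests.get(pdf_url, headers={"Accept": "*/*"})
print(ok.status_code)

# An Accept header that excludes application/pdf: a server that enforces
# content negotiation may refuse the request with 406 Not Acceptable.
refused = requests.get(pdf_url, headers={"Accept": "text/html"})
print(refused.status_code)
```

If the crawler recently narrowed its Accept header along these lines, that alone would explain PDFs and images jumping from zero to hundreds of 406s with no change on the site.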
-
I just got spammed with 406 errors: SEOmoz suddenly found 390 of them on my site (all PNG, JPG, and PDF files).
I have changed nothing on my site, and Google Webmaster Tools shows none of these, so I'm thinking the SEOmoz crawler may be doing something wrong...
It all boils down to trust, and I trust GWT (though it may be slow).
-
It is on a PDF with a link on it. The error message says:
Title: 406 : Error
Meta Description:
Traceback (most recent call last):
  File "build/bdist.linux-x86_64/egg/downpour/__init__.py", line 378, in _error
    failure.raiseException()
  File "/usr/local/lib/python2.7/site-packages/twisted/python/failure.py", line 370, in raiseException
    raise self.type, self.value, self.tb
Error: 406 Not Acceptable
Meta Robots: Not present/empty
Meta Refresh: Not present/empty
-
It's hard to quantify the impact of the 404 pages without knowing the relative size of your site.
Overall, 404s aren't good for your SEO. You should work toward fixing the pages that are returning the error or 301 redirecting the bad URLs.
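If you want to confirm which of the flagged URLs still 404 before deciding between a fix and a redirect, a quick status check outside the crawler settles it. A minimal sketch, assuming Python with the requests library installed; the URL list is a placeholder for the URLs in your crawl report:

```python
import requests

# Placeholder list: paste the URLs from your crawl report here.
urls = [
    "https://www.example.com/old-page/",
    "https://www.example.com/another-page/",
]

for url in urls:
    try:
        # HEAD fetches only the status line and headers, not the body;
        # allow_redirects=False reveals whether a 301 is already in place.
        response = requests.head(url, allow_redirects=False, timeout=10)
        print(url, response.status_code)
    except requests.RequestException as exc:
        print(url, "request failed:", exc)
```

Any URL that reports 404 here is a genuine broken page rather than a crawler quirk, and is a candidate for a 301 redirect to the closest live equivalent.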
-
Related Questions
-
How big is the problem: 404 errors as a result of out-of-stock products?
We had a discussion about the importance of 404 errors caused by products that are out of stock. Of course this is not good, but how much does it matter: low, medium, or high?
Technical SEO | Digital-DMG
-
4xx fix
Hi, I have quite a lot of 4xx errors on our site. They occurred because I cleaned up poor URLs that had commas and the like in them, so it is the old URLs that now return 4xx. There are no links to the URLs that 4xx. What is the best way of rectifying this issue of my own making? Thanks, Gavin
Technical SEO | gavinr
-
What is the best way to handle these duplicate page content errors?
Moz reports these as duplicate page content errors, and I'm not sure of the best way to handle them.
Home:
http://myhjhome.com/
http://myhjhome.com/index.php
Blog:
http://myhjhome.com/blog/
http://myhjhome.com/blog/?author=1
Should I just create 301 redirects for these, e.g. 301 http://myhjhome.com/index.php to http://myhjhome.com/ and 301 http://myhjhome.com/blog/?author=1 to http://myhjhome.com/ ? Or is there a better way to handle this type of duplicate page content error?
Technical SEO | ElykInnovation
-
Crawl errors: 301 (permanent redirect)
Hi, here are some questions about SEO Crawl Diagnostics. We've recently noticed these 301 (permanent redirect) errors on our website, and we concluded that the two factors below are the causes. 1. Some of our URLs that have no / at the end are automatically redirected to the same URL with / at the end. 2. For SEO reasons, we have designed our website so that when you type in a URL, it automatically redirects to a more SEO-friendly URL. For example, www.example.com/b1002/ automatically redirects to www.example.com/banana juice/. The question is: are these significant enough for our SEO that they need to be modified? One of the errors on our blog was having too many on-page links. Is this also a significant error, and if so, how many on-page links are recommended from an SEO perspective? Thanks in advance.
Technical SEO | Glassworks
-
Client wants to distribute web content to dealers - iFrame?
I have a client who sells a product through a network of nationwide dealers. He wants to provide updatable content to these dealers so they can create sections on their websites dedicated to the product, e.g. www.dealer.com/product_XYZ. The client is thinking he'd like to provide an iframe solution to the dealers so he can independently update the content that appears on their sites. I know iframes are old, but are there any SEO concerns I should know about? Another option is to distribute content via HTML that includes a rel=canonical tag as part of the code, but then he loses the ability to centrally update all the distributed content. Are there other solutions he should consider? Thanks.
Technical SEO | 540SEO
-
What does this error mean?
We recently merged our Google+ and Google Local pages and sent a request through Webmaster Tools to connect the Google+ page to our website. The request was sent successfully. However, when we click the 'Approve or reject this request' link, the following error message appears: 'Can't find associate request'. Does anyone know what we are doing incorrectly? Thanks in advance for any insight.
Technical SEO | SEOSponge
-
Numerous 404 errors in crawl diagnostics (non-existent pages)
As new as they come to SEO, so please be gentle... I have a WordPress site set up for my photography business. Looking at my crawl diagnostics, I see several 4xx (client error) alerts. These all point to non-existent pages on my site, e.g.: http://www.robertswanigan.com/happy-birthday-sara/109,97,105,108,116,111,58,104,116,116,112,58,47,47,109,97,105,108,116,111,58,105,110,102,111,64,114,111,98,101,114,116,115,119,97,110,105,103,97,110,46,99,111,109 Totally lost on what could be causing this. Thanks in advance for any help!
Technical SEO | Swanny811
-
RSS Feed Errors in Google
We recently (two months ago) launched RSS feeds for the category pages on our site. Last week we started seeing error pages pop up in Webmaster Tools' Crawl Errors report for feeds of old pages that were deleted from the site, removed from the sitemap, and absent from Google's index since long before we launched the RSS feeds. Example: www.mysite.com/super-old-page/feed/ I checked, and both the URL for the feed and the URL for the actual page return a 404 status. www.mysite.com/super-old-page/ is also showing up in our crawl errors; it has been deleted for months, but Webmaster Tools is very slow to remove pages from its Crawl Errors report. Where is Google finding these feeds that never existed?
Technical SEO | Hakkasan