My 404 page is showing a 4xx error. How can that be fixed?
-
My actual 404 page is giving a 4xx error.
The page address is http://www.ecowindchimes.com/v/404.asp. It loads fine... it is the page all 404s are directed to. So why is it showing a 404 error? The page works.
How can this be fixed?
Stephen
-
I think what you're seeing here is intentional behaviour, Stephen. It's Volusion's hack for working around the fact that their system doesn't handle 404s "correctly".
Bottom line: when you see these errors, you still need to fix whatever broken URL sent the visitor to the 404 page in the first place, but don't worry that the 404 page itself seems to be "not found" according to its status code.
Here's an explanation for "why" this is happening, if you're interested:
Normally, when a user requests a URL that doesn't exist, the server sends back a 404 status header. The server's configuration also tells it to display its error page content directly along with that 404 status, at the original URL, so the visitor is never redirected anywhere.
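To illustrate, here is roughly what that correct exchange looks like (a simplified sketch with a hypothetical domain and most headers trimmed):

    GET /no-such-page HTTP/1.1
    Host: www.example.com

    HTTP/1.1 404 Not Found
    Content-Type: text/html

    ... the server's error page HTML, shown at the original URL ...

The visitor sees a friendly error page, the address bar never changes, and the 404 status tells search engines not to index the URL.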
For a number of reasons, Volusion can't do this, so instead they've set up a catch-all redirect: visitors to non-existent pages get a 301 redirect to a regular website page that's been dressed up to look like a 404 page. Because that 404-looking page really exists and is served normally, it would ordinarily return a 200 status, which means "page found OK".
A little unorthodox, but OK so far as far as the user is concerned.
BUT! When a user hits a "page not found", the search engines want to get an actual 404 status error code back so they know not to index that non-existent URL. See the problem?
If the search engine gets a 200 response, it will assume that is the real page the visitor was trying to reach and will index the non-existent URL with the 404-ish looking content. Bad. So even though you - the user - can see the error page (200), Volusion has to give it a fake 404 status to give the search engines the correct information.
For a demonstration, go to this non-existent page: http://www.cochranemusic.ca/oops. You can see in your browser's URL bar that the address is still http://www.cochranemusic.ca/oops even though the page itself shows the server's error page content.
Now go to http://www.ecowindchimes.com/oops and notice that the URL in the address bar changes, because you've been forwarded to a page on your site called 404.asp. That's a real page on your website, made to "look" like a server error page. Even though you've been redirected to a real (200) page, the server has to report a 404 status to mimic the correct behaviour.
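If you'd rather see the raw status codes than trust the address bar, a few lines of Python will print each response's status and redirect target. This is just a sketch using the third-party requests library; the URLs are the thread's examples and may not behave this way forever:

    import requests

    urls = [
        "http://www.cochranemusic.ca/oops",        # error shown in place: expect a 404, no redirect
        "http://www.ecowindchimes.com/oops",       # catch-all: expect a 301 pointing at /v/404.asp
        "http://www.ecowindchimes.com/v/404.asp",  # the "fake" error page: renders fine, reports 404
    ]

    for url in urls:
        # allow_redirects=False so each hop reports its own status code
        response = requests.get(url, allow_redirects=False, timeout=10)
        print(url)
        print("  status  :", response.status_code)
        print("  location:", response.headers.get("Location", "(no redirect)"))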
Whew - that was confusing to try to explain, so let me know if it's still not clear.
Paul
P.S. To server admins: I know I've oversimplified the difference between a server's own 404 error page and an actual website page made to look like a 404. I do know the difference, but for the sake of keeping this explanation as straightforward as possible, I've glossed over it.
-
I agree - you need to update your web.config file with the appropriate custom-error instructions!
-
I suspect the error-page path is incorrect in your web.config file.
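For anyone on a server where they control web.config (hosted platforms like Volusion generally don't expose it, so treat this as a sketch for self-hosted IIS 7+ sites), the "correct" behaviour Paul describes - serving the custom error page at the original URL while keeping the 404 status - would look something like this:

    <configuration>
      <system.webServer>
        <httpErrors errorMode="Custom">
          <remove statusCode="404" subStatusCode="-1" />
          <!-- ExecuteURL serves the custom page in place of the default error,
               keeping the 404 status and the original URL (no redirect) -->
          <error statusCode="404" path="/v/404.asp" responseMode="ExecuteURL" />
        </httpErrors>
      </system.webServer>
    </configuration>

The responseMode="ExecuteURL" setting is the key: it executes the custom page in place rather than redirecting to it, so visitors and search engines both stay on the original URL and both receive the 404 status.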