Status Code 404: But why?
-
Google Webmaster Tools reports that I have several 404 status codes.
At first there were 2, then 4, then 6, and now 10. More appear every time I add a new page.
My old website was not managed by a CMS. After the old website was deleted, I installed WordPress, created new pages, and deleted and blocked (via robots.txt) the old pages.
In fact, all of the "page not found" URLs really don't exist! (Pic: Page not found)
The strange thing is that no pages link to those 404 URLs (all the WordPress-created pages are new!). SEOmoz doesn't report any 404 errors (Pic 3).
I checked all my pages:
- No "strange" links on any page
- No links reported by the SEOmoz tools
But why does GWMT report them? How can I resolve this problem?
I'm going crazy! Regards,
Antonio
-
Antonio,
Ryan has explained this perfectly.
For a more detailed explanation of methods for controlling page indexing, you could read this post on Restricting Robot Access for Improved SEO
It seems from your comments and questions about 301 redirects that there is some confusion about how they work and why we use them.
A 301 redirect is an instruction to the server, most commonly set up by adding rules to a .htaccess file (if you are using an Apache server).
The .htaccess file is read by the server when it receives a request to serve any page on the site. The server reads each rule in the file and checks to see whether the rule matches the incoming request. When a rule matches, the server carries out the action required. If no rule matches, then the server proceeds to serve the requested page.
So, in Ryan's first example above, there would be a line of code in the .htaccess file that basically says to the server IF the page requested is /apples, send the request to /granny-smith-apples using a 301 (Permanent) Redirect.
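As a rough illustration only (using the hypothetical /apples and /granny-smith-apples URLs from Ryan's example, and assuming an Apache server), that rule might be written in .htaccess like this:

```apache
# Sketch only: permanently redirect the removed /apples page
# to /granny-smith-apples (hypothetical URLs).
Redirect 301 /apples /granny-smith-apples

# Or the equivalent written as a mod_rewrite rule:
RewriteEngine On
RewriteRule ^apples/?$ /granny-smith-apples [R=301,L]
```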
The intent of using a 301 Redirect is to achieve two things:
- To prevent loss of traffic and offer the visitor an alternative landing page.
- To send a signal to Search Engines that the old page should be removed from the index and replaced with the new page.
The 301 Redirect is referred to as Permanent for this reason. Once the 301 Redirect is recognized and acted upon by the search engine, the page will be permanently removed from the index.
In contrast, the request to remove a page via Google WMT is a "moment in time" option. The page can possibly be re-indexed because it is accessible to crawlers via an external link from another site (unless you use the noindex meta tag instead of robots.txt). Then you would need to resubmit a removal request.
I hope this makes clearer the reasons for my response - basically, the methods you have used are not "closing the door" on the issue, but leaving the possibility open for it to occur again.
Sha
-
But I think (tell me if I'm right) that robots.txt is better than the noindex tag.
Definitely not. The opposite is true.
A no-index tag tells search engines not to index the page. The content will not be considered as duplicate anymore. But the search engines can still crawl the page and follow all the links. This allows your PR to flow naturally throughout your site. This also allows search engines to naturally read any changes in meta tags. A robots.txt disallow prevents the search engine from looking at any of the page's code. Think of it as a locked door. The crawler cannot read any meta tags and any PR from your site that flows to the page simply dies.
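To make the contrast concrete (a sketch only, with a hypothetical /old-page/ URL): the noindex instruction lives in the page's own HTML, so a crawler can still read the page and follow its links.

```html
<!-- In the <head> of the page you want kept out of the index.
     "follow" lets crawlers continue to follow the page's links. -->
<meta name="robots" content="noindex, follow">
```

A robots.txt disallow, by contrast, stops the crawler at the door, so it never sees the page's meta tags or links.

```
# robots.txt: crawlers are asked not to fetch the page at all
# (hypothetical path).
User-agent: *
Disallow: /old-page/
```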
Do I need a "real" page to create a 301 redirect?
No. Let's look at a redirect from both ends.
Example 1 - you delete the /apples page from your site. The /apples page no longer exists. After reviewing your site you decide the best replacement page would be the /granny-smith-apples page. Solution: a 301 redirect from the non-existent /apples page to the /granny-smith-apples page.
Example 2 - you delete the /apples page from your site. You no longer carry any form of apples but you do carry other fruit. After some thought you decide to redirect to the /fruit/ category page. Solution: a 301 redirect from the non-existent /apples page to the /fruit/ category page.
Example 3 - you delete the /apples page from your site but you no longer carry anything similar. You can decide to let the page 404. A 404 error is a natural part of the internet. Examine your 404 page to ensure it is helpful. Ideally it should contain your normal site navigation, a site search field and a friendly "sorry the page you are looking for is no longer available" message.
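If you want the server itself to serve that friendly page, Apache's ErrorDocument directive is one way to do it; a minimal sketch (the /404.php path is hypothetical, and on a WordPress site the theme's own 404 template normally handles this for permalink URLs):

```apache
# Sketch only: have Apache serve a custom, helpful 404 page for
# requests it cannot map to anything (hypothetical path).
ErrorDocument 404 /404.php
```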
Since you asked about existence of redirected pages, you can actually redirect to a page that does not exist. You could perform a 301 from /apples to a non-existent /apples2 page. When this happens it is almost always due to user error by the person who added the redirect. When that happens anyone who tries to reach the /apples page will be redirected to the non-existent /apples2 page and therefore receive a 404 error.
-
Ryan,
what you say is right: the best robots.txt file is a blank one. But I think (tell me if I'm right) that robots.txt is better than the noindex tag.
You have presented 404 errors. Those errors are links TO pages which don't exist, correct? Yes. If so, I believe Sha was recommending you can create a 301 redirect from the page which does not exist...
**Ok, but do I need a "real" page to create a 301 redirect?
I deleted those pages. So, to resolve my problem, must I redirect the old pages to the most relevant pages?**
-
Greenman,
I have a simple rule I learned over time. NEVER EVER EVER EVER use robots.txt unless there is absolutely no other method possible to achieve the required result. It is simply bad SEO and will cause problems. The best robots.txt file is a blank one.
When you use CMS software like WP, it is required for some areas, but its use should be minimized.
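As a sketch of what "minimized" might look like on a WordPress site (the /wp-admin/ line is just a common convention, shown here as an assumption rather than a requirement):

```
# Keep robots.txt as close to blank as possible; block only areas
# that should never be crawled (example path is an assumption).
User-agent: *
Disallow: /wp-admin/
```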
How can I add a 301 redirect to a page that doesn't exist?
You have presented 404 errors. Those errors are links TO pages which don't exist, correct? If so, I believe Sha was recommending you can create a 301 redirect from the page which does not exist, to the most relevant page that does exist.
It's a bit of semantics, but if you choose to, you can create 301s from or to pages that don't exist.
-
Greenman,
As I suspected, many of the bad URLs have old dates, some even from 2010. I took a look at your home page, specifically checking for the URL you highlighted in red in the 4th image. It is not present.
My belief is your issue has been resolved by the changes you made. I recommend you continue to monitor WMT for any NEW errors. If you see any fresh dates with 404, that would be a concern which should be investigated. Otherwise the problem appears to be resolved.
I also very much support Sha's reply above.
-
Hi Sha, thanks for your answer.
1. **robots.txt is not the most reliable method of ensuring that pages are not indexed**
If you use the noindex tag, the spider will access the page but will not get enough information. So the page will be semi-indexed.
My old pages were removed, not indexed (blocked by robots.txt), and I sent a removal request to Google. No problem with that: no results in the SERPs.
2. **So, the simple answer is that there are links out there which still point to your old pages... does not mean that they don't exist.**
You can see the links' source in the screenshot: just my old "ghost" pages. No other sources.
3. **If you know that you have removed pages, you should add 301 redirects to send any traffic to another relevant page.**
How can I add a 301 redirect to a page that doesn't exist?
Old page -> 301 -> New page (home?). But the old page doesn't exist in WordPress! **I don't want to stop the 404s, I want to remove the links that point to the deleted pages.**
-
My gut feeling is that a catch-all 301 is not a good thing. I can't give you any evidence, just a bit of reasoning and gut feeling.
I always try to put myself in the search engine's shoes: would I think a lot of 301s pointing to one irrelevant page is natural, and would it be hard to detect? I would answer no and no. Although I used to do it to my home page a while ago, I guess I had a different gut feeling back then.
-
Hi Greenman,
I would guess that your problem is most likely caused by the fact that you have used the robots.txt method to block the pages you removed.
robots.txt is not the most reliable method of ensuring that pages are not indexed. Even though robots.txt tells bots not to crawl a page, Google has openly stated that if a page is found through an external link from another site, it can still end up being indexed.
The most effective way to block pages is to use the noindex meta tag.
So, the simple answer is that there are links out there which still point to your old pages. Just because links are not highlighted in OSE or even Google WMT, does not mean that they don't exist. WMT should provide you with the most accurate link information, but even that is not necessarily complete according to Google.
Don't forget that there may also be "links" out there in the form of bookmarks or favorites that people keep in their browsers. When clicked these will also generate a 404 response from your server.
If you know that you have removed pages, you should add 301 redirects to send any traffic to another relevant page. If you do not know the URLs of the pages that have been removed, the best way to stop them from returning 404s is to add a catch-all 301 redirect so that any request for a page that does not exist is redirected to a single page. Some people send all of this traffic to the home page, but my preference would be to send it to a custom-designed 404 page or a relevant category page.
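For what it's worth, on a plain Apache-served site a catch-all of that kind is often sketched in .htaccess roughly like the block below. The /fruit/ target is hypothetical, and note that on a WordPress site this naive version would also catch legitimate permalink pages (they are not physical files either), so there you would normally handle unknown URLs inside WordPress instead.

```apache
# Sketch only: 301 any request that doesn't match a real file or
# directory to one chosen page (hypothetical /fruit/ target).
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_URI} !^/fruit/
RewriteRule ^ /fruit/ [R=301,L]
```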
Hope that helps,
Sha
-
When did you change over to the WP site?
Today is October 1st and the most recent 404 error shared in your image is from 9/27. If you have made the changes after 9/27, then no new errors have been found since you made the change.
Since the Moz report shows no crawl errors, your current site is clean, assuming your site navigation allowed your website to be fully crawled.
The Google errors can be from any website. The next step is to determine the source of the link causing the 404 error. Using the 2nd image you shared, click on each link in the left column of your WMT report. For example, http://www.mangotano.eu/ge/doc/tryit.php shows 3 pages. Click on it and you should see a list of those 3 pages so you can further troubleshoot.
-
I don't think they are. I think Google found them long ago, and no matter whether you block them, remove them, or whatever, Google takes forever to sort itself out.
-
Sorry Alan,
but I don't think Google should still be looking for the old pages. Here is why: I removed the old pages from the index with a GWMT "remove URL" request, and
I disallowed the old pages in robots.txt. The real problem is why Google finds links to the OLD pages in my NEW pages.
-
The 404s are from pages that used to be linked on your old site, correct? If so, I suggest that Google is still looking for them. Unless you changed your domain name, this would be the reason.
-
Yes, the links come from my pages. But I created the new pages with WordPress (and deleted the OLD website). So there are NO links between the OLD and NEW pages. How can GWMT find a connection? The pages' HTML source code doesn't show any links to those pages.
-
From your own web pages, I would assume.
I would suggest that even though they are not in the index, Google is still trying, and that WMT is a bit behind. I have similar errors for links that I took down months ago.
-
Hi Alan,
404 not-found pages are not indexed. My big problem is that I don't know where (and how) GWMT found the source links (the pages that link to the not-found pages).
-
If they were in a search engine's index, it will try them for some time before removing them from the index. I would not worry.