Status Code 404: But why?
-
Google Webmaster Tools reports that I have several 404 status codes.
At first there were 2, then 4, then 6, and now 10. The count grows every time I add a new page.
My old website was not managed by a CMS. After the old website was deleted, I installed WordPress, created new pages, and deleted and blocked (via robots.txt) the old pages.
In fact, all of the "page not found" URLs really don't exist! (Pic: Page not found.)
The strange thing is that no pages link to those 404 pages (all the WordPress-created pages are new!). SEOmoz doesn't report any 404 errors (Pic 3).
I checked all my pages:
- No "strange" links on any page
- No links reported by the SEOmoz tool
But why does GWMT report them? How can I resolve this problem?
I'm going crazy! Regards,
Antonio
-
Antonio,
Ryan has explained this perfectly.
For a more detailed explanation of methods for controlling page indexing, you could read this post on Restricting Robot Access for Improved SEO.
It seems from your comments and questions about 301 redirects that there is some confusion about how they work and why we use them.
A 301 redirect is an instruction to the server, most commonly implemented by adding rules to an .htaccess file (if you are using an Apache server).
The .htaccess file is read by the server when it receives a request to serve any page on the site. The server reads each rule in the file and checks whether the rule matches the current situation. When a rule matches, the server carries out the required action. If no rule matches, the server proceeds to serve the requested page.
So, in Ryan's first example above, there would be a line of code in the .htaccess file that basically says to the server IF the page requested is /apples, send the request to /granny-smith-apples using a 301 (Permanent) Redirect.
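As a rough sketch (assuming an Apache server with mod_rewrite enabled; the paths come from Ryan's example), that rule might look like this:

```
RewriteEngine On
# IF the requested page is /apples, send the request to
# /granny-smith-apples with a 301 (Permanent) Redirect
RewriteRule ^apples/?$ /granny-smith-apples [R=301,L]
```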
The intent of using a 301 Redirect is to achieve two things:
- To prevent loss of traffic and offer the visitor an alternative landing page.
- To send a signal to Search Engines that the old page should be removed from the index and replaced with the new page.
The 301 Redirect is referred to as Permanent for this reason. Once the 301 Redirect is recognized and acted upon by the search engine, the page will be permanently removed from the index.
In contrast, the request to remove a page via Google WMT is a "moment in time" option. The page can be re-indexed later if it is still accessible to crawlers via an external link from another site (unless you use the noindex meta tag instead of robots.txt). Then you would need to resubmit a removal request.
I hope this makes the reasons for my response clearer - basically, the methods you have used are not "closing the door" on the issue, but leaving the possibility open for it to occur again.
Sha
-
But I think, tell me if I'm right, that robots.txt is better than the noindex tag.
Definitely not. The opposite is true.
A noindex tag tells search engines not to index the page, so its content will no longer be counted as duplicate. But search engines can still crawl the page and follow all of its links. This allows your PR to flow naturally throughout your site, and it allows search engines to pick up any changes to your meta tags. A robots.txt disallow, on the other hand, prevents the search engine from looking at any of the page's code. Think of it as a locked door: the crawler cannot read any meta tags, and any PR that flows from your site to the page simply dies.
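To make the contrast concrete, here are minimal sketches of both approaches (the /old-page/ path is hypothetical):

```
<!-- noindex meta tag: goes in the <head> of the page itself. Crawlers
     can still fetch the page, read it, and follow its links. -->
<meta name="robots" content="noindex, follow">
```

```
# robots.txt disallow: the "locked door". Compliant crawlers never request
# the page, so they see none of its meta tags or links.
User-agent: *
Disallow: /old-page/
```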
Do I need a "real" page to create a 301 redirect?
No. Let's look at a redirect from both ends.
Example 1 - you delete the /apples page from your site. The /apples page no longer exists. After reviewing your site you decide the best replacement page would be the /granny-smith-apples page. Solution: a 301 redirect from the non-existent /apples page to the /granny-smith-apples page.
Example 2 - you delete the /apples page from your site. You no longer carry any form of apples but you do carry other fruit. After some thought you decide to redirect to the /fruit/ category page. Solution: a 301 redirect from the non-existent /apples page to the /fruit/ category page.
Example 3 - you delete the /apples page from your site but you no longer carry anything similar. You can decide to let the page 404. A 404 error is a natural part of the internet. Examine your 404 page to ensure it is helpful. Ideally it should contain your normal site navigation, a site search field and a friendly "sorry the page you are looking for is no longer available" message.
Since you asked about the existence of redirected pages: you can actually redirect to a page that does not exist. You could perform a 301 from /apples to a non-existent /apples2 page. When this happens it is almost always due to user error by the person who added the redirect. Anyone who tries to reach the /apples page will then be redirected to the non-existent /apples2 page and therefore receive a 404 error.
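For reference, the valid redirects in Examples 1 and 2 might be written like this in .htaccess (a sketch using Apache's mod_alias; only one rule would exist for a given old URL):

```
# Example 1: deleted /apples page -> its closest replacement
Redirect 301 /apples /granny-smith-apples

# Example 2 (alternative): deleted /apples page -> the /fruit/ category page
# Redirect 301 /apples /fruit/
```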
-
Ryan,
What you say is right: the best robots.txt file is a blank one. But I think, tell me if I'm right, that robots.txt is better than the noindex tag.
You have presented 404 errors. Those errors are links TO pages which don't exist, correct? Yes. If so, I believe Sha was recommending you can create a 301 redirect from the page which does not exist...
**OK. But do I need a "real" page to create a 301 redirect?
I deleted those pages. So, to resolve my problem, must I redirect each old page to the most relevant page?**
-
Greenman,
I have a simple rule I learned over time. NEVER EVER EVER EVER use robots.txt unless there is absolutely no other method possible to achieve the required result. It is simply bad SEO and will cause problems. The best robots.txt file is a blank one.
When you use CMS software like WP, it is required for some areas, but its use should be minimized.
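As an illustration, a deliberately minimal robots.txt for a WP install might be limited to something like this (a sketch; opinions differ on whether even this is necessary):

```
User-agent: *
Disallow: /wp-admin/
```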
How can I add a 301 redirect to a page that doesn't exist?
You have presented 404 errors. Those errors are links TO pages which don't exist, correct? If so, I believe Sha was recommending you can create a 301 redirect from the page which does not exist, to the most relevant page that does exist.
It's a bit of semantics, but if you choose to do so, you can create 301s from or to pages that don't exist.
-
Greenman,
As I suspected, many of the dates on the bad URLs are old, some even from 2010. I took a look at your home page, specifically checking for the URL you highlighted in red in the 4th image. It is not present.
My belief is your issue has been resolved by the changes you made. I recommend you continue to monitor WMT for any NEW errors. If you see any fresh dates with 404, that would be a concern which should be investigated. Otherwise the problem appears to be resolved.
I also very much support Sha's reply above.
-
Hi Sha, thanks for your answer.
1. **robots.txt is not the most reliable method of ensuring that pages are not indexed**
If you use the noindex tag, the spider will access the page but will not get enough information. So the page will be semi-indexed.
My old pages were removed, blocked from indexing (by robots.txt), and I sent a removal request to Google. No problem with that; they no longer appear in the SERPs.
2. **So, the simple answer is that there are links out there which still point to your old pages... does not mean that they don't exist.**
You can see the links' sources in the screenshot: just my old "ghost" pages. No other sources.
3. **If you know that you have removed pages you should add 301 redirects to send any traffic to another relevant page.**
How can I add a 301 redirect to a page that doesn't exist?
Old page -> 301 -> New page (home?). But the old page doesn't exist in WordPress! **I don't want to stop the 404s, I want to remove the links that point to the deleted pages.**
-
My gut feeling is that a catch-all 301 is not a good thing. I can't give you any evidence, just a bit of reasoning and gut feeling.
I always try to put myself in the search engine's shoes: would I think that a lot of 301s pointing to one irrelevant page is natural, and would it be hard to detect? I would answer no and no. Although I used to redirect to my home page a while ago; I guess I had a different gut feeling back then.
-
Hi Greenman,
I would guess that your problem is most likely caused by the fact that you have used the robots.txt method to block the pages you removed.
robots.txt is not the most reliable method of ensuring that pages are not indexed. Even though robots.txt tells bots not to crawl a page, Google has openly stated that if a page is found through an external link from another site, it can still be crawled and indexed.
The most effective way to block pages is to use the noindex meta tag.
So, the simple answer is that there are links out there which still point to your old pages. Just because links are not highlighted in OSE or even Google WMT does not mean that they don't exist. WMT should provide you with the most accurate link information, but even that is not necessarily complete, according to Google.
Don't forget that there may also be "links" out there in the form of bookmarks or favorites that people keep in their browsers. When clicked these will also generate a 404 response from your server.
If you know that you have removed pages, you should add 301 redirects to send any traffic to another relevant page. If you do not know the URLs of the pages that have been removed, the best way to stop them from returning 404s is to add a catch-all 301 redirect, so that any request for a page that does not exist is redirected to a single page. Some people send all of this traffic to the home page, but my preference would be to send it to a custom-designed 404 page or a relevant category page.
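As a sketch only (assuming an Apache server with mod_rewrite; /page-not-found/ is a hypothetical landing page), a catch-all rule might look like this. Note that a standard WordPress install already uses the same file/directory checks to route requests through index.php, so on a WP site you would normally handle this in the theme's 404 template or with a redirect plugin instead:

```
RewriteEngine On
# Don't touch the landing page itself (avoids a redirect loop)
RewriteCond %{REQUEST_URI} !^/page-not-found/
# If the requested URL is not an existing file...
RewriteCond %{REQUEST_FILENAME} !-f
# ...and not an existing directory...
RewriteCond %{REQUEST_FILENAME} !-d
# ...301 the request to a single relevant page
RewriteRule ^ /page-not-found/ [R=301,L]
```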
Hope that helps,
Sha
-
When did you change over to the WP site?
Today is October 1st, and the most recent 404 error shared in your image is from 9/27. If you made the changes after 9/27, then no new errors have been found since the change.
Since the Moz report shows no crawl errors, your current site is clean, assuming your site navigation allowed your website to be fully crawled.
The Google errors can be from any website. The next step is to determine the source of the link causing the 404 error. Using the 2nd image you shared, click on each link in the left column of your WMT report. For example, http://www.mangotano.eu/ge/doc/tryit.php shows 3 pages. Click on it and you should see a list of those 3 pages so you can further troubleshoot.
-
I don't think they are. I think Google found them long ago, and no matter whether you block them, remove them, or whatever, Google takes forever to sort itself out.
-
Sorry Alan,
but I think that Google may still be looking for the old pages. Here is what I did: I removed the old pages from the index with a GWMT "remove URL" request,
and I disallowed the old pages via robots.txt. The problem is why Google finds links to the OLD pages in the NEW pages.
-
The 404s are from pages that used to be linked in your old site, correct? If so, I suggest that Google is still looking for them. Unless you changed your domain name, this would be the reason.
-
Yes, the links come from my own pages. But I created the new pages with WordPress (and deleted the OLD website). So there are NO links between the OLD and NEW pages. How can GWMT find a connection? The pages' HTML source code doesn't show any links to those pages.
-
From your own web pages, I would assume.
I would suggest that even though they are not in the index, Google is still trying them, and that WMT is a bit behind. I see similar reports for links that I took down months ago.
-
Hi Alan,
Pages that return 404 Not Found are not indexed. My big problem is that I don't know where (and how) GWMT found the source links (the pages that link to the not-found pages).
-
If they were in a search engine's index, it will keep trying them for some time before removing them from the index. I would not worry.