Status Code 404: But why?
-
Google Webmaster Tools reports that I have several 404 status codes.
First there were 2, then 4, 6, and now 10; the count grows every time I add a new page.
I used to have a website without a CMS. After the old website was deleted, I installed WordPress, created new pages, and deleted and blocked (via robots.txt) the old pages.
In fact, all of the "page not found" URLs really don't exist! (Pic: Page not found.)
The strange thing is that no pages link to those 404 pages (all the WordPress-created pages are new!). SEOmoz doesn't report any 404 errors (Pic 3).
I checked all my pages:
- No "strange" links on any page
- No links reported by the SEOmoz tool
But why does GWMT report them? How can I resolve this problem?
I'm going crazy! Regards,
Antonio
-
Antonio,
Ryan has explained this perfectly.
For a more detailed explanation of methods for controlling page indexing, you could read this post on Restricting Robot Access for Improved SEO.
It seems from your comments and questions about 301 redirects, that there is some confusion on how they work and why we use them.
A 301 redirect is an instruction to the server, most commonly implemented by adding rules to a .htaccess file (if you are using an Apache server).
The .htaccess file is read by the server when it receives a request for any page on the site. The server reads each rule in the file and checks whether the rule matches the current request. When a rule matches, the server carries out the required action. If no rule matches, the server proceeds to serve the requested page.
So, in Ryan's first example above, there would be a line of code in the .htaccess file that basically says to the server IF the page requested is /apples, send the request to /granny-smith-apples using a 301 (Permanent) Redirect.
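As a sketch of what that rule might look like (assuming an Apache server and the example paths above; adapt the paths to your own URLs):

```apache
# .htaccess: permanently redirect the old /apples page to /granny-smith-apples
Redirect 301 /apples /granny-smith-apples

# Equivalent mod_rewrite form, if the site already uses rewrite rules:
# RewriteEngine On
# RewriteRule ^apples/?$ /granny-smith-apples [R=301,L]
```

The `Redirect` directive matches the request path against `/apples` and answers with a 301 status plus the new location, so both visitors and search engines are sent on to the replacement page.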
The intent of using a 301 Redirect is to achieve two things:
- To prevent loss of traffic and offer the visitor an alternative landing page.
- To send a signal to Search Engines that the old page should be removed from the index and replaced with the new page.
The 301 Redirect is referred to as Permanent for this reason. Once the 301 Redirect is recognized and acted upon by the search engine, the page will be permanently removed from the index.
In contrast, the request to remove a page via Google WMT is a "moment in time" option. The page can possibly be re-indexed because it is accessible to crawlers via an external link from another site (unless you use the noindex meta tag instead of robots.txt). Then you would need to resubmit a removal request.
I hope this makes clearer the reasons for my response - basically, the methods you have used are not "closing the door" on the issue, but leaving the possibility open for it to occur again.
Sha
-
But I think, tell me if I'm right, that robots.txt is better than the noindex tag.
Definitely not. The opposite is true.
A no-index tag tells search engines not to index the page. The content will not be considered as duplicate anymore. But the search engines can still crawl the page and follow all the links. This allows your PR to flow naturally throughout your site. This also allows search engines to naturally read any changes in meta tags. A robots.txt disallow prevents the search engine from looking at any of the page's code. Think of it as a locked door. The crawler cannot read any meta tags and any PR from your site that flows to the page simply dies.
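To make the contrast concrete, here are the two mechanisms side by side (a sketch; `/old-page` is a placeholder path):

```
# robots.txt ("locked door"): the crawler never fetches the page at all,
# so it cannot read meta tags, and PageRank flowing to the page dies
User-agent: *
Disallow: /old-page
```

```html
<!-- noindex meta tag in the page's <head>: the crawler still fetches the
     page and can follow its links, but drops the page from the index -->
<meta name="robots" content="noindex, follow">
```

Crucially, the two should not be combined on the same URL: if robots.txt blocks the page, the crawler never gets far enough to see the noindex tag.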
Do I need "real" page to create a 301 redirect?
No. Let's look at a redirect from both ends.
Example 1 - you delete the /apples page from your site. The /apples page no longer exists. After reviewing your site you decide the best replacement page would be the /granny-smith-apples page. Solution: a 301 redirect from the non-existent /apples page to the /granny-smith-apples page.
Example 2 - you delete the /apples page from your site. You no longer carry any form of apples but you do carry other fruit. After some thought you decide to redirect to the /fruit/ category page. Solution: a 301 redirect from the non-existent /apples page to the /fruit/ category page.
Example 3 - you delete the /apples page from your site but you no longer carry anything similar. You can decide to let the page 404. A 404 error is a natural part of the internet. Examine your 404 page to ensure it is helpful. Ideally it should contain your normal site navigation, a site search field and a friendly "sorry the page you are looking for is no longer available" message.
Since you asked about existence of redirected pages, you can actually redirect to a page that does not exist. You could perform a 301 from /apples to a non-existent /apples2 page. When this happens it is almost always due to user error by the person who added the redirect. When that happens anyone who tries to reach the /apples page will be redirected to the non-existent /apples2 page and therefore receive a 404 error.
-
Ryan,
what you say is right: The best robots.txt file is a blank one. But I think, tell me I'm right, that robots.txt is better than noindex tag.
You have presented 404 errors. Those errors are links TO pages which don't exist, correct? Yes. If so, I believe Sha was recommending you can create a 301 redirect from the page which does not exist...
**OK. But do I need a "real" page to create a 301 redirect?
I deleted those pages. So, to resolve my problem, must I redirect each old page to the most relevant page?**
-
Greenman,
I have a simple rule I learned over time: NEVER EVER EVER EVER use robots.txt unless there is absolutely no other method possible to achieve the required result. It is simply bad SEO and will cause problems. The best robots.txt file is a blank one.
When you use CMS software like WordPress, it is required for some areas, but its use should be minimized.
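For example, a minimal WordPress robots.txt following this rule might block only the admin area (a sketch using WordPress's standard admin path; check your own install before relying on it):

```
User-agent: *
Disallow: /wp-admin/
```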
How can I add a 301 redirect to a page that doesn't exist?
You have presented 404 errors. Those errors are links TO pages which don't exist, correct? If so, I believe Sha was recommending you can create a 301 redirect from the page which does not exist, to the most relevant page that does exist.
It's a bit of semantics, but if you choose to do so, you can create 301s from or to pages that don't exist.
-
Greenman,
As I suspected many of the dates of the bad URLs are old, some even being from 2010. I took a look at your home page specifically checking for the URL you highlighted in red on the 4th image. It is not present.
My belief is your issue has been resolved by the changes you made. I recommend you continue to monitor WMT for any NEW errors. If you see any fresh dates with 404, that would be a concern which should be investigated. Otherwise the problem appears to be resolved.
I also very much support Sha's reply above.
-
Hi Sha, thanks for your answer.
1. **robots.txt is not the most reliable method of ensuring that pages are not indexed**
If you use the noindex tag, the spider will still access the page, but it will not get enough information. So the page will be semi-indexed.
My old pages were removed, blocked from indexing (by robots.txt), and I sent a removal request to Google. No problem with that; no results in the SERPs.
2. So, the simple answer is that there are links out there which still point to your old pages... does not mean that they don't exist.
You can see the links' sources in the screenshot: just my old "ghost" pages. No other sources.
3. If you know that you have removed pages you should add 301 redirects to send any traffic to another relevant page.
How can I add a 301 redirect to a page that doesn't exist?
Old page -> 301 -> New page (home?). But the old page doesn't exist in WordPress! **I don't want to stop the 404s; I want to remove the links that point to the deleted pages.**
-
My gut feeling is that a catch-all 301 is not a good thing. I can't give you any evidence, just a bit of reasoning and gut feeling.
I always try to put myself in the search engine's shoes: would I think a lot of 301s pointing to one irrelevant page is natural, and would it be hard to detect? I would answer no and no. Although I used to do it to my home page a while ago; I guess I had a different gut feeling back then.
-
Hi Greenman,
I would guess that your problem is most likely caused by the fact that you have used the robots.txt method to block the pages you removed.
robots.txt is not the most reliable method of ensuring that pages are not indexed. Even though robots.txt tells bots not to crawl a page, Google has openly stated that if a page is found through an external link from another site, they can be crawled and indexed.
The most effective way to block pages is to use the noindex meta tag.
So, the simple answer is that there are links out there which still point to your old pages. Just because links are not highlighted in OSE or even Google WMT, does not mean that they don't exist. WMT should provide you with the most accurate link information, but even that is not necessarily complete according to Google.
Don't forget that there may also be "links" out there in the form of bookmarks or favorites that people keep in their browsers. When clicked these will also generate a 404 response from your server.
If you know that you have removed pages, you should add 301 redirects to send any traffic to another relevant page. If you do not know the URLs of the pages that have been removed, the best way to stop them from returning 404s is to add a catch-all 301 redirect so that any request for a page that does not exist is redirected to a single page. Some people send all of this traffic to the home page, but my preference would be to send it to a custom-designed 404 page or a relevant category page.
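A catch-all redirect of this kind might be sketched in .htaccess as follows (assuming Apache with mod_rewrite enabled; `/page-moved/` is a placeholder for whatever landing page you choose):

```apache
RewriteEngine On
# Only fire when the requested path is not an existing file...
RewriteCond %{REQUEST_FILENAME} !-f
# ...and not an existing directory
RewriteCond %{REQUEST_FILENAME} !-d
# Send every remaining request to one landing page with a permanent redirect
RewriteRule ^ /page-moved/ [R=301,L]
```

On a CMS like WordPress, where page URLs are routed through index.php rather than real files, these file-exists conditions would catch legitimate pages too, so a redirect plugin or the 404 template is usually the safer place for this logic.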
Hope that helps,
Sha
-
When did you change over to the WP site?
Today is October 1st and the most recent 404 error shared in your image is from 9/27. If you have made the changes after 9/27, then no new errors have been found since you made the change.
Since the moz report shows no crawl errors, your current site is clean assuming your site navigation allowed your website to be fully crawled.
The Google errors can be from any website. The next step is to determine the source of the link causing the 404 error. Using the 2nd image you shared, click on each link in the left column of your WMT report. For example, http://www.mangotano.eu/ge/doc/tryit.php shows 3 pages. Click on it and you should see a list of those 3 pages so you can further troubleshoot.
-
I don't think they are. I think Google found them long ago, and no matter whether you block them, remove them, or whatever, Google takes forever to sort itself out.
-
Sorry Alan,
but I think that Google may still be looking for the old pages. Here is the reason: I deleted the old pages from the index via the GWMT "remove URL" request, and I disallowed the old pages via robots.txt. The problem is why Google finds links to the OLD pages in the NEW pages.
-
The 404s are from pages that used to be linked on your old site, correct? If so, I suggest that Google is still looking for them. Unless you changed your domain name, this would be the reason.
-
Yes, the links come from my own pages. But I created the new pages with WordPress (and deleted the OLD website). So there are NO links between the OLD and NEW pages. How can GWMT find a connection? The pages' HTML source code doesn't show any links to those pages.
-
From your own web pages, I would assume.
I would suggest that even though they are not in the index, Google is still trying, and that WMT is a bit behind. I have similar errors for links that I took down months ago.
-
Hi Alan,
404 not found pages are not indexed. My big problem is that I don't know where (and how) GWMT found the source links (the pages that link to the not-found pages).
-
If they were in a search engine's index, it will keep trying them for some time before removing them from the index. I would not worry.