Status Code 404: But why?
-
Google Webmaster Tools reported that I have several 404 status codes.
First there were 2, then 4, then 6, and now 10. More appear every time I add a new page.
My old website was not managed by a CMS. After the old website was deleted, I installed WordPress, created new pages, and deleted and blocked (via robots.txt) the old pages.
In fact, all the "page not found" pages really don't exist! (Pic: Page not found).
The strange thing is that no pages link to those 404 pages (all the WordPress-created pages are new!). SEOmoz doesn't report any 404 errors (Pic 3).
I checked all my pages:
- No "strange" links on any page
- No links reported by the SEOmoz tool
But why does GWMT report them? How can I resolve this problem?
I'm going crazy! Regards
Antonio -
Antonio,
Ryan has explained this perfectly.
For a more detailed explanation of methods for controlling page indexing, you could read this post on Restricting Robot Access for Improved SEO
It seems from your comments and questions about 301 redirects, that there is some confusion on how they work and why we use them.
A 301 redirect is an instruction to the server which is most commonly done by adding a .htaccess file (if you are using an Apache server).
The .htaccess file is read by the server when it receives a request to serve any page on the site. The server reads each rule in the file and checks whether the rule matches the current request. When a rule matches, the server carries out the required action. If no rule matches, the server proceeds to serve the requested page.
So, in Ryan's first example above, there would be a line of code in the .htaccess file that basically says to the server IF the page requested is /apples, send the request to /granny-smith-apples using a 301 (Permanent) Redirect.
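As a sketch, that rule could look like this in a .htaccess file (the paths are taken from Ryan's example above; this uses Apache's mod_alias, which is enabled on most hosts):

```apache
# .htaccess — permanent redirect from the deleted page to its replacement.
# Any request for /apples is answered with a 301 pointing to /granny-smith-apples.
Redirect 301 /apples /granny-smith-apples
```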
The intent of using a 301 Redirect is to achieve two things:
- To prevent loss of traffic and offer the visitor an alternative landing page.
- To send a signal to Search Engines that the old page should be removed from the index and replaced with the new page.
The 301 Redirect is referred to as Permanent for this reason. Once the 301 Redirect is recognized and acted upon by the search engine, the page will be permanently removed from the index.
In contrast, the request to remove a page via Google WMT is a "moment in time" option. The page can possibly be re-indexed because it is accessible to crawlers via an external link from another site (unless you use the noindex meta tag instead of robots.txt). Then you would need to resubmit a removal request.
I hope this makes clearer the reasons for my response - basically, the methods you have used are not "closing the door" on the issue, but leaving the possibility open for it to occur again.
Sha
-
But I think (tell me if I'm right) that robots.txt is better than the noindex tag.
Definitely not. The opposite is true.
A no-index tag tells search engines not to index the page. The content will not be considered as duplicate anymore. But the search engines can still crawl the page and follow all the links. This allows your PR to flow naturally throughout your site. This also allows search engines to naturally read any changes in meta tags. A robots.txt disallow prevents the search engine from looking at any of the page's code. Think of it as a locked door. The crawler cannot read any meta tags and any PR from your site that flows to the page simply dies.
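The noindex signal is usually added as a `<meta name="robots" content="noindex">` tag in the page's head, but if editing the page HTML is awkward, the same signal can be sent as an HTTP header from Apache config. A minimal sketch, assuming mod_headers is enabled and `old-page.html` is just a placeholder file name:

```apache
# .htaccess — send a noindex header for one file instead of using the meta tag.
# "noindex, follow" removes the page from the index while still letting
# crawlers follow its links, so PageRank continues to flow.
<Files "old-page.html">
    Header set X-Robots-Tag "noindex, follow"
</Files>
```

Note this only works if crawlers are allowed to request the page; a robots.txt disallow would prevent them from ever seeing the header.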
Do I need a "real" page to create a 301 redirect?
No. Let's look at a redirect from both ends.
Example 1 - you delete the /apples page from your site. The /apples page no longer exists. After reviewing your site you decide the best replacement page would be the /granny-smith-apples page. Solution: a 301 redirect from the non-existent /apples page to the /granny-smith-apples page.
Example 2 - you delete the /apples page from your site. You no longer carry any form of apples but you do carry other fruit. After some thought you decide to redirect to the /fruit/ category page. Solution: a 301 redirect from the non-existent /apples page to the /fruit/ category page.
Example 3 - you delete the /apples page from your site but you no longer carry anything similar. You can decide to let the page 404. A 404 error is a natural part of the internet. Examine your 404 page to ensure it is helpful. Ideally it should contain your normal site navigation, a site search field and a friendly "sorry the page you are looking for is no longer available" message.
Since you asked about the existence of redirected pages: you can actually redirect to a page that does not exist. You could perform a 301 from /apples to a non-existent /apples2 page. When this happens it is almost always due to user error by the person who added the redirect. Anyone who tries to reach the /apples page will then be redirected to the non-existent /apples2 page and receive a 404 error.
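The three examples above could be sketched as .htaccess rules along these lines (the paths are illustrative, and only one rule for /apples can be active at a time):

```apache
# Example 1: deleted /apples page -> the closest replacement page.
Redirect 301 /apples /granny-smith-apples

# Example 2: no apples left, but a relevant category page exists.
# Redirect 301 /apples /fruit/

# Example 3: nothing similar remains — add no rule at all, and let the
# request return your (helpful, well-designed) 404 page.
```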
-
Ryan,
What you say is right: the best robots.txt file is a blank one. But I think (tell me if I'm right) that robots.txt is better than the noindex tag.
You have presented 404 errors. Those errors are links TO pages which don't exist, correct? Yes. If so, I believe Sha was recommending you can create a 301 redirect from the page which does not exist...
**Ok. But do I need a "real" page to create a 301 redirect?
I deleted those pages. So, to resolve my problem, must I redirect the old pages to the most relevant pages?**
-
Greenman,
I have a simple rule I learned over time. NEVER EVER EVER EVER use robots.txt unless there is absolutely no other method possible to achieve the required result. It is simply bad SEO and will cause problems. The best robots.txt file is a blank one.
When you use CMS software like WP, it is required for some areas, but its use should be minimized.
How can I add a 301 redirect to a page that doesn't exist?
You have presented 404 errors. Those errors are links TO pages which don't exist, correct? If so, I believe Sha was recommending you can create a 301 redirect from the page which does not exist, to the most relevant page that does exist.
It's a bit of semantics, but if you choose to do so, you can create 301s from or to pages that don't exist.
-
Greenman,
As I suspected, many of the dates of the bad URLs are old, some even from 2010. I took a look at your home page, specifically checking for the URL you highlighted in red in the 4th image. It is not present.
My belief is your issue has been resolved by the changes you made. I recommend you continue to monitor WMT for any NEW errors. If you see any fresh dates with 404, that would be a concern which should be investigated. Otherwise the problem appears to be resolved.
I also very much support Sha's reply above.
-
Hi Sha, thanks for your answer.
1. **robots.txt is not the most reliable method of ensuring that pages are not indexed**
If you use the noindex tag, the spider will access the page but will not get enough information. So, the page will be semi-indexed.
My old pages were removed, not indexed (blocked by robots.txt), and I sent a removal request to Google. No problem with that; no results in the SERPs.
2. So, the simple answer is that there are links out there which still point to your old pages...does not mean that they don't exist.
You can see the links' sources in the screenshot: just my old "ghost" pages. No other sources.
3. If you know that you have removed pages you should add 301 redirects to send any traffic to another relevant page.
How can I add a 301 redirect to a page that doesn't exist?
Old page -> 301 -> New page (Home?). But the old page doesn't exist in WordPress! **I don't want to stop the 404s; I want to remove the links that point to the deleted pages.**
-
My gut feeling is that a catch-all 301 is not a good thing. I can't give you any evidence, just a bit of reasoning and gut feeling.
I always try to put myself in the search engine's shoes: would I think a lot of 301s pointing to one irrelevant page is natural, and would it be hard to detect? I would answer no and no. Although I used to do it to my home page a while ago; I guess I had a different gut feeling back then.
-
Hi Greenman,
I would guess that your problem is most likely caused by the fact that you have used the robots.txt method to block the pages you removed.
robots.txt is not the most reliable method of ensuring that pages are not indexed. Even though robots.txt tells bots not to crawl a page, Google has openly stated that if a page is found through an external link from another site, they can be crawled and indexed.
The most effective way to block pages is to use the noindex meta tag.
So, the simple answer is that there are links out there which still point to your old pages. Just because links are not highlighted in OSE or even Google WMT, does not mean that they don't exist. WMT should provide you with the most accurate link information, but even that is not necessarily complete according to Google.
Don't forget that there may also be "links" out there in the form of bookmarks or favorites that people keep in their browsers. When clicked these will also generate a 404 response from your server.
If you know that you have removed pages, you should add 301 redirects to send any traffic to another relevant page. If you do not know the URLs of the pages that have been removed, the best way to stop them from returning 404s is to add a catch-all 301 redirect so that any request for a page that does not exist is redirected to a single page. Some people send all of this traffic to the home page, but my preference would be to send it to a custom-designed 404 or a relevant category page.
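A catch-all redirect like this can be sketched with mod_rewrite; `/old-pages-gone.html` is a placeholder for whatever landing page you choose. Be aware that WordPress installs its own rewrite rules that send every request for a missing file to index.php, so on a WP site a blanket rule like this needs care:

```apache
# .htaccess — catch-all 301: any request for a file or directory that
# does not exist on disk is redirected to a single chosen landing page.
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^ /old-pages-gone.html [R=301,L]
```

Because the landing page itself exists on disk, the `!-f` condition fails for it and no redirect loop occurs.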
Hope that helps,
Sha
-
When did you change over to the WP site?
Today is October 1st and the most recent 404 error shared in your image is from 9/27. If you have made the changes after 9/27, then no new errors have been found since you made the change.
Since the moz report shows no crawl errors, your current site is clean assuming your site navigation allowed your website to be fully crawled.
The Google errors can be from any website. The next step is to determine the source of the link causing the 404 error. Using the 2nd image you shared, click on each link in the left column of your WMT report. For example, http://www.mangotano.eu/ge/doc/tryit.php shows 3 pages. Click on it and you should see a list of those 3 pages so you can further troubleshoot.
-
I don't think they are. I think Google found them long ago, and no matter if you block them, remove them, or whatever, Google takes forever to sort itself out.
-
Sorry Alan,
but I think that Google may still be looking for the old pages. This is the reason: I deleted the old pages from the index via the GWMT "remove URL" request,
and I disallowed the old pages via robots.txt. The problem is why Google finds links to the OLD pages in the NEW pages.
-
The 404s are from pages that used to be linked on your old site, correct? If so, I suggest that Google is still looking for them. Unless you changed your domain name, this would be the reason.
-
Yes, the links come from my pages. But I created the new pages with WordPress (and deleted the OLD website). So, there are NO links between the OLD and NEW pages. How can GWMT find a connection? The pages' HTML source code doesn't show any links to those pages.
-
From your own web pages, I would assume.
I would suggest that even though they are not in the index, Google is still trying, and that WMT is a bit behind. I have something similar for links that I took down months ago.
-
Hi Alan,
Pages that return 404 Not Found are not indexed. My big problem is that I don't know where (and how) GWMT found the source links (the pages that link to the not-found pages).
-
If they were in a search engine's index, it will try them for some time before removing them from the index. I would not worry.