404s in WMT are old pages, and the listed referrers no longer link to them
-
Within the last 6 days, Google Webmaster Tools has shown a jump in 404s - around 7,000. The 404 pages are from the browse section of an old platform; we no longer use them or link to them.
I don't know how Google is finding these pages. When I check the referrer links, they are either 404s themselves, or the page exists but the link to the 404 in question is not on the page or in the source code. The sitemap is also often listed as a referrer, but these links are definitely not in our sitemap and haven't been for some time. So it looks to me like the referrer data is outdated. Is that possible?
But somehow these pages are still being found. Any ideas on how I can diagnose the problem and find out how Google is discovering them?
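One way to sanity-check a reported referrer is to fetch its HTML and see whether the dead URL actually appears among its links. A minimal sketch using only Python's standard library (the markup and paths below are hypothetical):

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect every anchor href found in an HTML document."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

def page_links_to(html, target_url):
    """True if the page's source actually contains a link to target_url."""
    parser = LinkExtractor()
    parser.feed(html)
    return target_url in parser.links
```

Running this over each referrer WMT reports would separate genuinely live links from stale referrer data.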
-
How long ago did you switch platforms? It can take months for Google to come back around to a page that linked to your site, and pages on your site will stay in Google's cache until it has made a few more passes.
When you switched, did you set up any 301 redirects? Examine the backlinks to your domain: any that come from good pages should be redirected to the new URLs. If not, they will be scooped up by active SEOs (finding 404 links is a popular link-building technique).
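A redirect map is the usual mechanism for this. A rough sketch of the idea, with made-up old and new paths; the `as_htaccess` helper emits Apache's `Redirect 301` directive, one common way to deploy such a map:

```python
# Hypothetical mapping from dead old-platform paths to their nearest new equivalents.
REDIRECT_MAP = {
    "/old-browse/widgets": "/shop/widgets",
    "/old-browse/gadgets": "/shop/gadgets",
}

def redirect_target(path):
    """New URL for an old path, or None if no redirect is defined."""
    return REDIRECT_MAP.get(path)

def as_htaccess(mapping):
    """Emit one Apache 'Redirect 301 old new' directive per pair."""
    return "\n".join(f"Redirect 301 {old} {new}"
                     for old, new in sorted(mapping.items()))
```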
http://support.google.com/webmasters/bin/answer.py?hl=en&answer=93633
If you know the links will be dead forever, try using a 410 response, as it is supposed to make search engines drop the page faster.
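Putting the two suggestions together, the per-URL decision can be sketched as a simple precedence: 301 if a new home exists, 410 if gone for good, 404 otherwise. The paths below are placeholders:

```python
# Placeholder set of paths we know will never return.
GONE_FOREVER = {"/old-browse/clearance-2009"}

def status_for(path, redirect_map, gone=GONE_FOREVER):
    """301 if the content has a new home, 410 if it is gone for good, else 404."""
    if path in redirect_map:
        return 301
    if path in gone:
        return 410
    return 404
```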
http://www.seroundtable.com/404-410-google-15225.html (bottom)
Have you requested that Google remove the old directories/pages? If the content is gone and has no backlinks, try a removal request.
http://support.google.com/webmasters/bin/answer.py?hl=en&answer=1663427
-
I'm having a similar problem with a new site that was created by copying an old site in its entirety. We went through the trouble of cleaning everything up: removing pages that were no longer relevant, fixing the sitemaps, etc. Now, months later, WMT is showing a spike of 404s for the old pages, with the XML sitemap and the sitemap page listed as referrers - but they are definitely not being linked from there. I'm assuming there was some sort of hiccup where Google used an older, cached version of the sitemap to find these links.
I wound up just clearing the errors out of WMT and waiting to see if it recrawls the error pages. If Google continues to crawl them even though they aren't being linked to, our next course of action will be to 301 them all, just in case.
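One way to confirm the live sitemap really doesn't contain the reported URLs is to parse it and diff against WMT's 404 list. A sketch with Python's standard library (the URLs are illustrative):

```python
import xml.etree.ElementTree as ET

# Standard sitemap namespace, per the sitemaps.org protocol.
NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

def sitemap_urls(xml_text):
    """Every <loc> URL in a sitemap document."""
    return {loc.text.strip() for loc in ET.fromstring(xml_text).iter(NS + "loc")}

def stale_referrals(xml_text, reported_404s):
    """404 URLs that WMT attributes to the sitemap but the live sitemap lacks."""
    live = sitemap_urls(xml_text)
    return sorted(url for url in reported_404s if url not in live)
```

A non-empty result from `stale_referrals` would support the "cached sitemap" theory.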
Related Questions
-
Old Content Pages
Hello, we run a large sports website. Since 2009 we have been doing game previews for most games every day across all the major sports (NFL, CFB, NBA, MLB, etc.). Most of these previews generate traffic for the 1-2 days leading up to, or the day of, the event. After that there is minimal if any traffic, and over the years almost nothing goes to the old previews. If you search for any of these, each time the same matchup happens Google updates its rankings and filters out the old matchups/previews in favor of new ones. So our question is: what would you do with all this old content? Is it worth just keeping, given that Google indexes a majority of it? Should we prune some of the old articles? The other option we thought of (though it's not really practical) is to create event pages where we reuse a single post each time the teams meet; if there were some sort of benefit, we could do it.
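For what it's worth, this kind of triage can be sketched as a simple rule: keep anything with backlinks or meaningful traffic, prune or noindex the rest. The thresholds below are purely illustrative, not a recommendation:

```python
def triage(page):
    """Classify an old preview page.

    page: e.g. {"visits_90d": 3, "backlinks": 0, "evergreen": False}
    Thresholds are illustrative placeholders, not guidance.
    """
    if page["backlinks"] > 0:
        return "keep-or-301"      # preserve whatever link equity exists
    if page["visits_90d"] >= 50 or page["evergreen"]:
        return "keep"
    return "noindex-or-prune"
```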
Technical SEO | dueces0
-
What's Moz's Strategy behind their blog main categories?
I've only just noticed that Moz's blog categories have been moved into a pull-down menu; see it underneath 'Explore Posts by Category' on any blog page. This means that the whole list of categories under that pull-down is not crawlable by bots, and therefore no link juice flows down to those category pages. I imagine the main drive behind that move is to sculpt PageRank so that the business/money pages of the website get greater link equity, as opposed to wasting it all by throwing it down to the many category pages. It would be good to hear more from Rand or anyone on his team as to how they came to engineer this and why. One of the things I wonder is: with the sheer amount of content that Moz produces, is it possible to contemplate an effective technical architecture such as that? I know they do a great job of interlinking content from one post to another, so one could argue that this supersedes the need for hierarchical PageRank distribution via categories. But I wonder: is it working better this way versus having crawlable blog category links on the blog section? Have they performed tests? Some insights or further info on this from Moz would be very welcome. Thanks in advance.
Technical SEO | carralon
David0
-
"One Page With Two Links To Same Page; We Counted The First Link" Is this true?
I read this today: http://searchengineland.com/googles-matt-cutts-one-page-two-links-page-counted-first-link-192718 I thought to myself, yep, that's what I've been reading on Moz for years (pity Matt could not confirm that's still the case in 2014). But reading through the comments, Michael Martinez of http://www.seo-theory.com/ pointed out that Matt says "...the last time I checked, was 2009, and back then -- uh, we might, for example, only have selected one of the links from a given page."
Technical SEO | PaddyDisplays
This would imply that it does not always mean the first link. Michael goes on to say, "Back in 2008 when Rand WRONGLY claimed that Google was only counting the first link (I shared results of a test where it passed anchor text from TWO links on the same page)," and then, "In practice the search engine sometimes skipped over links and took anchor text from a second or third link down the page." For me this is significant. I know people who have had "SEO experts" recommend that they attach a blog to their e-commerce site and publish posts (with no real interest for readers) containing anchor-text links to their landing pages. I thought that posting blog posts just for anchor-text links was a waste of time if you are already linking to the landing page from the main navigation, as Google would see that link first. But if Michael is correct, then those anchor-text blog posts would have value. So who is right, Rand or Michael?
-
Missing meta description on 404 page
Hi, my 404 page does not have a meta description. Is that an error? I ask because I ran a report and SEOmoz flagged it as a problem. Thanks!
Technical SEO | JohnHuynh0
-
Does rel= canonical combine link juice for 2 pages?
If two pages are very similar and one points rel=canonical at the other, will page authority pass from the canonicalized page to the target page? Also, what happens when a page rel=canonicals to itself?
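As an aside, it's easy to check programmatically which canonical a page declares and whether it is self-referential. A sketch using Python's standard library (the markup and URLs are made up):

```python
from html.parser import HTMLParser

class CanonicalFinder(HTMLParser):
    """Record the href of <link rel="canonical">, if any."""
    def __init__(self):
        super().__init__()
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        if tag == "link":
            d = dict(attrs)
            if (d.get("rel") or "").lower() == "canonical":
                self.canonical = d.get("href")

def is_self_canonical(html, page_url):
    finder = CanonicalFinder()
    finder.feed(html)
    return finder.canonical == page_url
```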
Technical SEO | SkinLaboratory0
-
Unnatural Link Warning No Longer Showing in GWT?
Hi, we recently took on a new client that had been hit by the recent Google updates. After a really good look at their analytics and their link profile, it looked like they had been hit for over-optimisation of anchor text. Over the last month or so we have been working to remove a pile of links that contain their main keyword, starting with the easiest to remove and the lowest quality. At the same time we have been building links using semantic keywords and junk anchor text in a bid to dilute the ratio of main anchor text within their profile. We have a timetable of tasks drawn up which we are working through; when all tasks are complete, we plan to write a very nice reconsideration request to Mr Google. I logged in to Google Webmaster Tools this morning and noticed that the 'Unnatural Links' notice has been removed from that domain. Does anyone know if this signifies anything? We haven't sent a reconsideration request to Google yet. Thanks.
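A quick way to track that dilution progress is to compute the share of exact-match anchors in a backlink export. A sketch (the anchor texts and keyword below are invented):

```python
from collections import Counter

def exact_anchor_ratio(anchors, money_term):
    """Share of backlinks whose anchor text exactly matches the money keyword.

    anchors: list of anchor-text strings from a backlink export.
    """
    if not anchors:
        return 0.0
    counts = Counter(a.strip().lower() for a in anchors)
    return counts[money_term.strip().lower()] / len(anchors)
```

Re-running this weekly on a fresh export would show whether the ratio is actually falling.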
Technical SEO | AdeLewis
Ade.
What's the best way to eliminate duplicate page content caused by blog archives?
I (obviously) can't delete the archived pages, regardless of how much traffic they do or don't receive. Would you recommend a meta robots tag or a robots.txt file? I'm not sure I'll have access to the root directory, so I could be stuck with using a meta robots tag, correct? Any other suggestions to alleviate this pesky duplicate page content issue?
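If you end up on the meta robots route, you can verify the tag is actually in place by parsing each archive page's HTML. A sketch with Python's standard library (the markup is hypothetical):

```python
from html.parser import HTMLParser

class RobotsMetaCheck(HTMLParser):
    """Detect a <meta name="robots"> tag whose content includes 'noindex'."""
    def __init__(self):
        super().__init__()
        self.noindex = False

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            d = dict(attrs)
            name = (d.get("name") or "").lower()
            content = (d.get("content") or "").lower()
            if name == "robots" and "noindex" in content:
                self.noindex = True

def has_noindex(html):
    checker = RobotsMetaCheck()
    checker.feed(html)
    return checker.noindex
```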
Technical SEO | ICM0
-
Optimum Number of Links on Any Given Page
One of the guidelines you provide stipulates: "You should avoid having too many (roughly defined as more than 100) hyperlinks on any given page. When search engine spiders crawl the Internet they are limited by technology resources and are only able to crawl a certain number of links per webpage. In addition, search engine algorithms divide the value of some popularity metrics by the amount of links on a given page. This means that each of the pages being linked to from a given page are also affected by the number of links on the linking page. For these reasons, we recommend you include less than 100 links per page to ensure that they are all crawled, though if your pages have a high page authority, search engines will usually follow more links." As far as these 100 links are concerned, is this in reference to ALL links, including outbound, internal, etc.? Or does it refer only to outbound links to other sites?
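For what it's worth, counting internal versus outbound links on a page is straightforward to script, which makes auditing the 100-link guideline easy. A sketch with Python's standard library (the host and markup are made up):

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkCounter(HTMLParser):
    """Tally internal vs. outbound anchors relative to a given host."""
    def __init__(self, site_host):
        super().__init__()
        self.site_host = site_host
        self.internal = 0
        self.outbound = 0

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        href = dict(attrs).get("href")
        if not href:
            return
        host = urlparse(href).netloc
        # Relative URLs have no netloc and count as internal.
        if host and host != self.site_host:
            self.outbound += 1
        else:
            self.internal += 1

def count_links(html, site_host):
    counter = LinkCounter(site_host)
    counter.feed(html)
    return counter.internal, counter.outbound
```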
Technical SEO | johncmmc0