Issue with 'Crawl Errors' in Webmaster Tools
-
Have an issue with a large number of 'Not Found' webpages being listed in Webmaster Tools. In the 'Detected' column, the dates are recent (May 1st - 15th). However, clicking into the 'Linked From' column, all of the link sources are old, many from 2009-10.
Furthermore, I have checked a large number of the source pages to double-check that the links don't still exist, and, as I expected, they don't.
Firstly, I am concerned that Google thinks there are a vast number of broken links on this site when in fact there are not.
Secondly, if the errors do not actually exist (and never have), why do they remain listed in Webmaster Tools, which claims they were found again this month?
Thirdly, what is the best and quickest way of getting rid of these errors? Google advises that the 'URL Removal Tool' will only remove the pages from the Google index, NOT from the crawl errors report. Its guidance is that URLs which keep returning a 404 will eventually be removed automatically. I don't know how many more 404s Google needs to see before it gives up on a URL and link that haven't existed for 18-24 months.
Thanks.
-
Thanks both for your responses. It's a strange one, and I can only assume that these pages remain in Google's index - I have checked many of the link sources and found that the links do not exist, and presumably haven't since the pages were deleted. It seems ridiculous that you should have to 301 every page you delete; there are literally 500+ of these phantom links to non-existent URLs, and the site is changing all the time.
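For anyone else facing this volume of phantom links, the source-page check can be scripted rather than done by hand. A minimal sketch in Python, assuming the crawl errors have been exported to a CSV with 'URL' and 'Linked from' columns (the filename and column names are assumptions - adjust them to match the real export):

```python
# Minimal sketch: for each (missing URL, "Linked From" source page) pair, fetch the
# source page and report whether it still contains a reference to the missing URL.
# The CSV filename and column names below are assumptions.
import csv
import urllib.request
import urllib.error


def page_html(url):
    """Fetch a page and return its HTML, or an empty string if it can't be fetched."""
    try:
        with urllib.request.urlopen(url, timeout=10) as response:
            return response.read().decode("utf-8", errors="replace")
    except (urllib.error.URLError, ValueError):
        return ""


with open("crawl_errors_export.csv", newline="") as f:
    for row in csv.DictReader(f):
        missing, source = row["URL"], row["Linked from"]
        verdict = "still linked" if missing in page_html(source) else "link not found"
        print(f"{verdict}: {missing} (linked from {source})")
```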
I have opted to add a 'noindex' meta tag to the 404 pages and also to encourage Google to drop them from its index by adding the pages to the robots.txt file.
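With 500+ URLs, writing the robots.txt entries out by hand is painful. A minimal sketch that turns a plain text list of the removed URLs (one full URL per line; the filename is an example) into Disallow lines:

```python
# Minimal sketch: generate robots.txt Disallow lines from a list of removed URLs.
# Assumes a plain text file with one full URL per line; only the paths are emitted.
from urllib.parse import urlparse

with open("removed_urls.txt") as f:
    paths = sorted({urlparse(line.strip()).path for line in f if line.strip()})

print("User-agent: *")
for path in paths:
    print(f"Disallow: {path}")
```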
Let's see if it works - I'll post on here when I know for sure so other people with the same question can see the outcome.
Thanks again, Damien and Steven.
-
Completely agree with Damien. If the pages don't exist but Webmaster Tools is showing them, 301 them; there has to be a link somewhere on the internet that is making Google think they exist. I would also go through the server logs to see if there is any additional information, such as a referring page, for the non-existent URLs - a quick sketch of that is below.
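If the server runs Apache or Nginx with the standard 'combined' log format, a short script can surface the 404 hits and the pages that referred them. A minimal sketch, assuming that format and an example log path (adjust both for the actual setup):

```python
# Minimal sketch: list the most common (missing page, referring page) pairs for
# requests that returned 404, read from a "combined" format access log.
# The log path and format are assumptions - adjust for the actual server.
import re
from collections import Counter

LOG_LINE = re.compile(
    r'\S+ \S+ \S+ \[[^\]]+\] "(?:GET|HEAD|POST) (?P<path>\S+) [^"]*" '
    r'(?P<status>\d{3}) \S+ "(?P<referrer>[^"]*)"'
)

pairs = Counter()
with open("access.log") as log:
    for line in log:
        match = LOG_LINE.search(line)
        if match and match.group("status") == "404":
            pairs[(match.group("path"), match.group("referrer"))] += 1

# Most frequent (missing page, referring page) combinations first.
for (path, referrer), hits in pairs.most_common(20):
    print(f"{hits:>5}  {path}  <-  {referrer if referrer not in ('', '-') else '(no referrer)'}")
```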
-
Hey,
I guess if you've exhausted all other possibilities, you can either let them return a 404 and leave them be, which will most likely do you no harm, or 301 each URL to another relevant page on your site.
Make sure they are actually returning a 404 first, though, via a header response check - a quick way to run that check in bulk is sketched below.
DD
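A minimal sketch of that bulk header check, assuming a plain text file of the flagged URLs, one per line (the filename is just an example). It reports the raw status code for each URL without following redirects, so a 301 shows up as a 301:

```python
# Minimal sketch: print the HTTP status code for each URL flagged in Webmaster Tools.
# Assumes a plain text file with one URL per line; the filename is an example.
import urllib.request
import urllib.error


class NoRedirect(urllib.request.HTTPRedirectHandler):
    """Stop urllib from following redirects so 301/302 responses report as themselves."""
    def redirect_request(self, req, fp, code, msg, headers, newurl):
        return None


OPENER = urllib.request.build_opener(NoRedirect)


def check_status(url):
    """Return the raw HTTP status code (or an error note) for a HEAD request."""
    request = urllib.request.Request(url, method="HEAD")
    try:
        with OPENER.open(request, timeout=10) as response:
            return response.status
    except urllib.error.HTTPError as e:
        return e.code  # non-2xx responses arrive here as exceptions
    except urllib.error.URLError as e:
        return f"error ({e.reason})"


if __name__ == "__main__":
    with open("crawl_error_urls.txt") as f:
        for url in (line.strip() for line in f if line.strip()):
            print(check_status(url), url)
```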
Related Questions
-
Google selecting incorrect URL as canonical: 'Duplicate, submitted URL not selected as canonical'
Hi there. A number of our URLs are being de-indexed by Google. When looking into this using Google Search Console, the same message is appearing on multiple pages across our sites: 'Duplicate, submitted URL not selected as canonical'.
Indexing allowed? Yes
User-declared canonical: https://www.mrisoftware.com/ie/products/real-estate-financial-software/
Google-selected canonical: https://www.mrisoftware.com/uk/products/real-estate-financial-software/
Has anyone else experienced this problem? How can I get Google to select the correct, user-declared canonical? Thanks.
Technical SEO | nfrank
-
Will a robots.txt 'disallow' of a directory keep Google from seeing 301 redirects for pages/files within the directory?
Hi - I have a client who had thousands of dynamic PHP pages indexed by Google that shouldn't have been. He has since blocked these PHP pages via a robots.txt disallow. Unfortunately, many of those PHP pages were linked to multiple times by high-quality sites (instead of the static URLs) before he put up the 'disallow'. If we create 301 redirects for some of these PHP URLs that are still showing high-value backlinks and send them to the correct static URLs, will Google even see these 301 redirects and pass link value to the proper static URLs? Or will the robots.txt keep Google away, so we lose all these high-quality backlinks? I guess the same question applies if we use the canonical tag instead of the 301: will the robots.txt keep Google from seeing the canonical tags on the PHP pages? Thanks very much, V
Technical SEO | Voodak
-
What was the Google 'update' on 31st March?
Hi all. I looked back and saw that there was an update shown in 'Search Analytics' in Webmaster Tools a few weeks before the mobile algorithm update. I haven't been able to find any mention of it or what it did, so I thought I'd check in here. PS: also, this is a 90-day stretch and shows that our rankings have taken a hit since the mobile algorithm update. Interesting stuff (see image below).
Technical SEO | RobFD
-
Is there a good free tool that will check my entire subdomain for mobile usability issues?
I've been using the Google tool and going page by page, and everything seems great. But I'd really like something that will crawl the entire subdomain and give me a report. Any suggestions?
Technical SEO | absoauto
-
Duplicate Page Content error but I can't see it
Hi All. We're getting a lot of Duplicate Page Content errors but I can't match them up. For example, this page: http://www.daytripfinder.co.uk/attractions/32-antique-cottage
It is showing the on-page properties as follows:
Title: DayTripFinder - Things to do reviewed by you - 7,000 attractions
Meta Description: Read Reviews, Browse Opening Hours and Prices. View Photos, Maps. 7,000 UK Visitor Attractions.
But this isn't the page title or meta description. And it's showing five example pages (among many others) that share it. Again, the page titles and descriptions are different:
http://www.daytripfinder.co.uk/attractions/mckinlay-theatre
http://www.daytripfinder.co.uk/attractions/bakers-dolphin
http://www.daytripfinder.co.uk/attractions/shipley-park-fishing
http://www.daytripfinder.co.uk/attractions/king-johns-lodge-and-gardens
http://www.daytripfinder.co.uk/attractions/city-hall
Any ideas? Not sure if I'm missing something here! Thanks!
Technical SEO | KateWaite85
-
Should we redirect 404 errors seen in Webmaster Tools ending with ... (dot, dot, dot)?
Lately I have seen lots of 404 errors showing in Webmaster Tools that are not really links, many of them from spammy pages (I did not put them there). One of the most common types is URLs where the link ends in ... (dot, dot, dot). These links are being reported as coming from pages like this: http://www.the-pick.com/00_fahrenheit,2.html For example, a link like this would show up in Webmaster Tools as a 404 error: http://www.ehow.com/how_2352088_easily-... Are these worth redirecting? So far I have redirected some of them and found that it was not helpful and possibly harmful. Anyone else had the same experience? I am also getting lots of partial URLs showing up from pages that reference my site, but the URL is cut off and the link is not active. Does Google really count these as links? And is redirecting a link from a spammy page acknowledging acceptance of it - could that count against you?
Technical SEO | KentH
-
Recent Webmaster Tools Glitch Impacting Site Quality?
The ramifications of this would not be specific to me but to anyone with this type of content on their pages... Maybe someone can chime in here, but I'm not sure how much, if at all, site errors (for example 404 errors) as reported by Google Webmaster Tools are seen as a factor in site quality, which would impact SEO rankings. Any insight on that alone would be appreciated. I've noticed some fairly new, weird stuff going on in the WMT 404 error reports. It seems as though their engine is finding objects within the source code of a page that are NOT links but look like URLs, then trying to crawl them and reporting them as broken. I've seen a couple of different cases in my environment that seem to trigger this issue. The easiest one to explain is Google Analytics virtual pageview JavaScript calls, where, for example, you might send a virtual pageview back to GA for clicks on outbound links. So in the source code of your page you would have something like: onclick="_gaq.push(['_trackPageview', '/outboundclick/www.othersite.com']);" Although this is obviously not a crawlable link, sure enough Webmaster Tools would now be reporting the following broken page with a 404: www.mysite.com/outboundclick/www.othersite.com I've seen other cases of things that look like URLs but are not actual links being pulled out of the page source and reported as broken links. Has anyone else noticed this? Do 404 instances (in this case false ones) reported by Webmaster Tools impact site quality rankings and SEO? Interesting issue here, I'm looking forward to hearing some people's thoughts on this. Chris
Technical SEO | cbubinas
-
SEOMoz Crawling Errors
I recently implemented a blog using WordPress on our website. I didn't use WordPress as the CMS for the rest of the site, just the blog portion. So, as an example, I installed WordPress in http://www.mysite/blog/ not in the root. My error report in SEOMoz went from 0 to 22e. The Moz bot or crawler that SEOMoz uses is reporting a ton of 4xx errors for strange links that shouldn't exist anywhere on the site. Example: Good link - http://www.mysite/products.html Bad link reported by SEOMoz - http://www.mysite/blog/my-first-post/products.html I've also noticed that my page speed has become much slower, as reported by Google. Does anybody know what could be happening here? I know that typically it's better to install WordPress in the root and use it to control the entire site, but I was under the gun to get a blog out. Thanks
Technical SEO | TRICORSystems