Crawl errors for pages that no longer exist
-
Hey folks,
I've been working on a site recently where I took a bunch of old, outdated pages down. In the Google Search Console "Crawl Errors" section, I've started seeing a bunch of "Not Found" errors for those pages. That makes perfect sense.
The thing that I'm confused about is that the "Linked From" list only shows a sitemap that I ALSO took down. Alternatively, some of them list other old, removed pages in the "Linked From" list.
Is there a reason that Google is trying to inform me that pages/sitemaps that don't exist are somehow still linking to other pages that don't exist? And is this ultimately something I should be concerned about?
Thanks!
-
Thanks for the question, this can definitely be annoying for webmasters!
Unfortunately, bots can't do everything in parallel. They have to work in steps...
Step 1. Take List #1 of links.
Step 2. Crawl those links and build List #2.
Step 3. Crawl List #2 and build List #3, and so on.
Now, sometimes it doesn't follow that same order. Let's say that in Step 3 it finds a bunch of pages with unique content. The next time around, it might go and recheck some of those URLs directly, without first verifying that they are still linked from anywhere. Why start the crawl all the way from the beginning again when you already have a big list of URLs?
But this creates a problem. When some of the links it crawled in Step 3 aren't there any more, Google will tell you they aren't there and tell you how it originally found them (which happened to be from a page in List #1). But what if Google hasn't rechecked that page in List #1 recently? What if you just removed it too?
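A minimal sketch of that mechanism, with hypothetical URLs and a deliberately simplified crawler: the key detail is that "Linked From" records where a URL was *first discovered*, and the recrawl revisits the stored URL list without re-verifying that the referrer still exists.

```python
# Discovery phase: the crawler saw these links and noted the source page.
discovered = {
    "/old-page-1": {"linked_from": "/sitemap.xml"},
    "/old-page-2": {"linked_from": "/old-page-1"},
}

# The webmaster then removes the old pages AND the sitemap.
live_urls = set()  # nothing from the old section exists any more

# Recrawl phase: the crawler revisits its stored URL list, not the site
# from scratch, so the stored (now stale) referrer is what gets reported.
crawl_errors = []
for url, meta in discovered.items():
    if url not in live_urls:  # a fetch here would return 404
        crawl_errors.append((url, meta["linked_from"]))

for url, source in crawl_errors:
    print(f"Not Found: {url}  (Linked From: {source})")
```

Both errors cite sources that no longer exist, which is exactly the pattern described in the question: the referrer data is a snapshot from discovery time, not a live check.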
Well, for a little while, at least, you will end up with errors.
Now, here comes the real rub: how long will it take for Google to find and correct that stale entry in the crawl report? Days? Weeks? Months? Who knows. Your best bet is to mark the errors as fixed and force Google to keep rechecking. Eventually, it will figure it out.
TL;DR: it's a data freshness and reporting issue that isn't your fault and isn't worth your time.
-
No - Google is just showing how slow it is at updating data in Webmaster Tools.
Don't worry - if you wait long enough they'll go away. You could also mark them as fixed (but only if you are sure that there are no links still pointing to these pages; to check that your internal linking is OK, Screaming Frog is a great tool).
Dirk