Google Search Console Crawl Errors?
-
We are using Google Search Console to monitor Crawl Errors. It seems Google is listing errors that are not actual errors. For instance, it shows this as "Not found":
https://tapgoods.com/products/tapgoods__8_ft_plastic_tables_11_available
The page genuinely does not exist, but we cannot find any pages linking to it. The error has a "Linked from" tab, but when I look at the source of the pages it lists, the link is not there. In this case it lists the front page twice (once for http and once for https), and one of the pages it claims links to the nonexistent URL above is itself a nonexistent page.
We marked all the errors as fixed last week, and this week they came back; 2/3 are the same pages we marked as fixed last week.
Is this an issue with Google Search Console? Are we being penalized for a nonexistent issue?
-
Agreed with Chris. When you have a lot of pages and your code is a little more complex than basic markup, Google Search Console has a habit of reporting phantom URLs like these. What I have also seen in the past is that Google picks up parts of your tracking code and tries to construct URL structures from string fragments in that code, URLs that don't really exist on the site.
Nothing to really worry about. If you run a monthly or quarterly crawl to check for weird URL structures on your site and these URLs don't pop up there, you should be fine. As mentioned, just mark them as fixed so the real issues move back to the top of the report.
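As an illustration of how this happens (the snippet below is hypothetical, not taken from the site in question), a crawler that scans raw JavaScript for path-like string fragments can easily "discover" URLs that are only ever assembled at runtime:

```python
import re

# Hypothetical tracking/JS snippet: '/products/' is only a fragment that gets
# concatenated with variables at runtime, so the full URL never exists as-is.
tracking_js = """
ga('send', 'pageview');
var endpoint = '/products/' + productSlug + '_' + stockCount + '_available';
"""

# A naive path-matching pattern, similar in spirit to what a source-scanning
# crawler might apply to raw code.
URL_LIKE = re.compile(r"/[\w/]+")

print(URL_LIKE.findall(tracking_js))  # → ['/products/']
```

The fragment alone looks like a crawlable path, which is consistent with phantom "Not found" URLs showing up under pages that never contain the literal link.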
-
Hello,
You are not being penalized for these crawl errors, but it's still worth monitoring them. Continue to mark them as fixed, and double-check that none of the reported URLs are genuinely broken. Many people have run into the same issue; it appears to be an inaccuracy on Google's side. Another option is to 301-redirect these 'fake' URLs, though that may be time-consuming for you. I would also double-check your sitemap and make sure the links are not listed there.
Hope this helps.
Chris
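The sitemap check above is easy to script. A minimal sketch (the sitemap snippet below is illustrative, not the site's real sitemap) that parses a sitemap and reports which flagged URLs actually appear in it:

```python
import xml.etree.ElementTree as ET

def urls_in_sitemap(sitemap_xml, candidates):
    """Return the subset of candidate URLs present in the sitemap's <loc> entries."""
    ns = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
    root = ET.fromstring(sitemap_xml)
    locs = {loc.text.strip() for loc in root.findall(".//sm:loc", ns)}
    return [u for u in candidates if u in locs]

# Illustrative sitemap content:
sitemap = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://tapgoods.com/</loc></url>
  <url><loc>https://tapgoods.com/products/real-product</loc></url>
</urlset>"""

# The URL flagged as "Not found" in the question:
flagged = ["https://tapgoods.com/products/tapgoods__8_ft_plastic_tables_11_available"]

print(urls_in_sitemap(sitemap, flagged))  # → []  (empty list: sitemap is clean)
```

If a flagged URL does show up here, the sitemap is feeding it to Google and should be corrected.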
Related Questions
-
How to get product info into Google Search Result box
Hi, in the last couple of weeks I have been seeing more and more search results that show a product with retailer prices below it (see the sample attached). Are there schema.org properties one could use to improve the chances of appearing there? Thanks in advance, Dieter Lang
Intermediate & Advanced SEO | Storesco
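Results like that are generally driven by schema.org Product/Offer structured data on the retailer pages. A hedged sketch of the JSON-LD (all names and values below are placeholders, and whether a rich result actually appears is ultimately up to Google):

```python
import json

# Placeholder Product markup with nested Offer data (price + currency),
# the pieces typically needed for price-bearing product results.
product = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example Running Shoe",
    "image": "https://example.com/shoe.jpg",
    "offers": {
        "@type": "Offer",
        "price": "89.95",
        "priceCurrency": "EUR",
        "availability": "https://schema.org/InStock",
    },
}

# Emit as a JSON-LD script block for the product page's <head>.
print('<script type="application/ld+json">')
print(json.dumps(product, indent=2))
print("</script>")
```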
Google and JavaScript
Hey there! Recent announcements from Google encourage webmasters to let Google crawl JavaScript: http://www.googlewebmastercentral.blogspot.com/2014/05/understanding-web-pages-better.html and http://googlewebmastercentral.blogspot.com/2014/05/rendering-pages-with-fetch-as-google.html. We have always put JS and CSS behind robots.txt, but are now considering taking them out of robots. Any opinions on this?
Intermediate & Advanced SEO | CleverPhD
Miniclip has a search box showing in Google SERP: how?
For their brand keyword search - miniclip - the Google SERP includes a search box reading "Search miniclip.com". Anyone have an idea how this can be done?
Intermediate & Advanced SEO | vivekg
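That box is the sitelinks search box, which sites request via schema.org WebSite markup carrying a SearchAction. A sketch of the JSON-LD (the search URL template below is an assumed example, and Google decides on its own whether to actually show the box):

```python
import json

# WebSite markup with a SearchAction pointing at the site's internal search.
# The {search_term_string} placeholder is filled in by Google with the query.
sitelinks_searchbox = {
    "@context": "https://schema.org",
    "@type": "WebSite",
    "url": "https://www.miniclip.com/",
    "potentialAction": {
        "@type": "SearchAction",
        "target": "https://www.miniclip.com/search?q={search_term_string}",
        "query-input": "required name=search_term_string",
    },
}

# Emit as a JSON-LD script block for the home page.
print('<script type="application/ld+json">')
print(json.dumps(sitelinks_searchbox, indent=2))
print("</script>")
```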
Google Places Listing Active in Two Separate Google Places Accounts?
Hi, are there any issues with having a Google Places listing in two separate Google Places accounts? For example, we have a client who cannot access their old Google Places account (an ex-employee had the login details, which they can't recover) and wants us to take control of the listing. If we click the "Is this your listing?" manage-this-page button and claim the listing, will that transfer the listing to our control, or will it create a duplicate? Are there any problems with having the listing in different, separate accounts? Is it a situation in which the last person who claims the listing takes control, and the listing automatically deactivates in the old account? Do all the images remain as well? Thanks, Tom
Intermediate & Advanced SEO | MBASydney
Robots.txt error
I currently have this in my robots.txt file:
User-agent: *
Disallow: /authenticated/
Disallow: /css/
Disallow: /images/
Disallow: /js/
Disallow: /PayPal/
Disallow: /Reporting/
Disallow: /RegistrationComplete.aspx
WebMatrix 2.0
In Webmaster Tools > Health Check > Blocked URLs, I copy and paste the code above and click Test, and everything looks OK. But when I log out and log back in, I see the following under Blocked URLs:
User-agent: *
Disallow: /
WebMatrix 2.0
Currently Google doesn't index my domain, and I don't understand why this is happening. Any ideas? Thanks, Seda
Intermediate & Advanced SEO | Rubix
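One way to check what a given set of rules actually blocks is Python's urllib.robotparser. Note that a stray "Disallow: /" (as in the second block above) blocks the entire site, which would explain the domain not being indexed. The example.com URLs below are placeholders:

```python
from urllib import robotparser

# The intended rules from the question (abbreviated).
rules = """\
User-agent: *
Disallow: /authenticated/
Disallow: /css/
Disallow: /RegistrationComplete.aspx
""".splitlines()

rp = robotparser.RobotFileParser()
rp.parse(rules)

print(rp.can_fetch("*", "https://example.com/css/site.css"))               # False
print(rp.can_fetch("*", "https://example.com/RegistrationComplete.aspx"))  # False
print(rp.can_fetch("*", "https://example.com/products/"))                  # True
```

Running the same check against the rules Webmaster Tools reports after re-login ("Disallow: /") would return False for every path, so the fix is to find out why the live robots.txt differs from the one being tested.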
Domain Favoured by Google
Hi there, we have just launched our website on a .ie domain in Ireland and were wondering: would the .ie website be favoured by Google over a competitor with a .co.uk or .com domain? Kind regards
Intermediate & Advanced SEO | Paul78
Best way to block a search engine from crawling a link?
If we have one page on our site that is only linked to from one other page, what is the best way to block crawler access to that page? I know we could set the link to "nofollow", which would prevent it from passing any authority, and we could set the page to "noindex" to keep it out of search results, but what is the best way to prevent the crawler from accessing that one link?
Intermediate & Advanced SEO | nicole.healthline
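For completeness: truly preventing crawl access takes a robots.txt Disallow rule for that page, while noindex/nofollow can also be served as an HTTP response header rather than page markup. A minimal sketch of the header approach, where the WSGI app and the /private-page path are hypothetical:

```python
# A toy WSGI app that adds an X-Robots-Tag header for one path.
# The header tells compliant bots not to index the page or follow its links;
# unlike robots.txt, it does not stop the page from being fetched.
def app(environ, start_response):
    headers = [("Content-Type", "text/html")]
    if environ.get("PATH_INFO") == "/private-page":
        headers.append(("X-Robots-Tag", "noindex, nofollow"))
    start_response("200 OK", headers)
    return [b"<html><body>ok</body></html>"]

# Quick demonstration of the headers the app emits:
def demo_start_response(status, headers):
    print(status)
    for name, value in headers:
        print(f"{name}: {value}")

body = app({"PATH_INFO": "/private-page"}, demo_start_response)
```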
How to prevent Google from crawling our product filter?
Hi all, we have a crawler problem on one of our sites, www.sneakerskoopjeonline.nl. On this site, visitors can specify criteria to filter the available products, and these filters are passed as HTTP GET arguments, so the number of possible filter URLs is virtually limitless. To prevent duplicate content and an insane number of pages in the search indexes, our software automatically adds noindex, nofollow, and noarchive directives to these filter result pages. However, we're unable to get crawlers (Google in particular) to ignore these URLs. We've already changed the on-page filter HTML to JavaScript, hoping this would cause the crawler to ignore it, but it seems that Googlebot executes the JavaScript and crawls the generated URLs anyway. What can we do to prevent Google from crawling all the filter options? Thanks in advance for the help. Kind regards, Gerwin
Intermediate & Advanced SEO | footsteps
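Alongside the meta directives, one common pattern is to compute a canonical URL that strips the filter parameters and serve it in a rel="canonical" link on every filtered page, so the crawled filter variants consolidate onto one page. A sketch, where the filter parameter names are assumptions for illustration:

```python
from urllib.parse import urlsplit, parse_qsl, urlencode, urlunsplit

# Hypothetical filter parameters to strip; non-filter params (e.g. pagination)
# are kept so that genuinely distinct pages keep distinct canonical URLs.
FILTER_PARAMS = {"color", "size", "brand", "sort"}

def canonical_url(url):
    """Return the URL with filter query parameters removed."""
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query) if k not in FILTER_PARAMS]
    return urlunsplit((parts.scheme, parts.netloc, parts.path, urlencode(kept), ""))

print(canonical_url("https://www.sneakerskoopjeonline.nl/sneakers?color=red&page=2"))
# → https://www.sneakerskoopjeonline.nl/sneakers?page=2
```

A robots.txt pattern such as a wildcard disallow on the filter query strings is another option, but note that blocking crawl also hides the noindex directive from Google, so canonicalization is usually the safer complement.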