Crawl errors in GWT!
-
I have been seeing a large number of "access denied" and "not found" crawl errors. I have since fixed the issues causing these errors; however, I am still seeing them in Webmaster Tools.
At first I thought the data was outdated, but the data is tracked on a daily basis!
Does anyone have experience with this? Does GWT really re-crawl all those pages/links every day to see if the errors still exist?
Thanks in advance for any help/advice.
-
Neither access denied nor not found crawl errors are dealbreakers as far as Google is concerned. A not found error usually just means you have links pointing to pages that don't exist (which is also how you can end up with more errors than pages crawled: the link to the missing page gets followed, but since there's no page at the other end, no page is actually crawled). Access denied is usually caused by either requiring a login or blocking the search bots with robots.txt.
If the links causing 404 errors aren't on your site, it's certainly possible that errors would still be appearing. One thing you can do is double-check your 404 page to make sure it really is returning a 404 Not Found status code at the URL level. One common setup I've seen all over the place is a 302 redirect to a single generic 404 page (like www.example.com/notfound). Because the actual URL isn't returning a 404, bots will sometimes just keep crawling those links over and over again.
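If you want to spot-check this yourself, here's a minimal sketch using Python's requests library (the URL is just a placeholder for any page that shouldn't exist on your site):

import requests

# Hypothetical URL for a page that shouldn't exist on the site
url = "https://www.example.com/this-page-does-not-exist"

# Don't follow redirects, so a 302 to a generic "not found" page stays visible
response = requests.get(url, allow_redirects=False)

print(response.status_code)  # Ideally 404, not 200 or 302
if response.status_code in (301, 302):
    print("Redirects to:", response.headers.get("Location"))

If you see a 200 or a redirect instead of a 404, that's a sign bots will keep treating the URL as a real page.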
Google doesn't necessarily crawl everything every day or update everything every day. If your traffic isn't being affected by these errors, I would just try as best you can to minimize them, and otherwise not worry too much about it.
-
Crawl errors can also come from links to those pages on other sites, or from URLs already in Google's own index. When Google revisits those URLs and doesn't find them, it flags them as 404 errors.
-
BTW, the crawl stats show Google crawling about 3-10K pages a day, while the daily errors number over 100K. Is this even possible? How can it find so many errors if the spiders aren't even crawling that many pages?
Thanks again!
Related Questions
-
Does Google still not crawl forms with a method=post?
I know back in '08 Google started crawling forms using method=get but not method=post. What's the latest? Is this still valid?
Intermediate & Advanced SEO | | Turkey0 -
Google Adsbot crawling order confirmation pages?
Hi, we have had roughly 1,000+ requests per 24 hours from Google AdsBot to our confirmation pages. This generates an error, as the confirmation page cannot be viewed after closing the order or by anyone who didn't complete it. How is AdsBot finding pages to crawl that are not linked anywhere on the site, not in the sitemap, and not linked from anywhere else? Is there any harm in a Google crawler receiving a higher percentage of errors, even though the pages are not supposed to be requested? Is there anything we can do to prevent the errors for the benefit of our network team, and what are the possible risks of any measures we can take? This bot seems to be for evaluating the quality of landing pages used in AdWords, so why is it trying to access confirmation pages when they haven't been set for any of our adverts? We included "Disallow: /confirmation" in the robots.txt, but it has continued to request these pages, generating a 403 and an error in the log files, so it seems AdsBot doesn't follow robots.txt. Thanks in advance for any help, Sam
Intermediate & Advanced SEO | | seoeuroflorist0 -
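One note on that robots.txt point: Google documents that its AdsBot crawlers ignore the global wildcard (*) rules and have to be named explicitly. A sketch of what that might look like (the path is just the one mentioned in the question):

User-agent: AdsBot-Google
Disallow: /confirmation

Whether that's appropriate depends on whether you still want AdsBot evaluating your other landing pages, so treat it as a starting point rather than a definitive fix.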
Huge increase in server errors and robots.txt
Hi Moz community! Wondering if someone can help? One of my clients (an online fashion retailer) has been receiving a huge increase in server errors (500s and 503s) over the last 6 weeks, and it has got to the point where people cannot access the site because of server errors. The client recently changed hosting companies to deal with this, and they have just told us they removed the DNS records once the name servers were changed; they have now fixed this and are waiting for the name servers to propagate again. These errors also correlate with a huge decrease in pages blocked by the robots.txt file, which makes me think someone has perhaps changed this and not told anyone... Anyone have any ideas here? It would be greatly appreciated! 🙂 I've been chasing this up with the dev agency and the hosting company for weeks, to no avail. Massive thanks in advance 🙂
Intermediate & Advanced SEO | | labelPR0 -
Robot.txt error
I currently have this under my robots.txt file:
User-agent: *
Disallow: /authenticated/
Disallow: /css/
Disallow: /images/
Disallow: /js/
Disallow: /PayPal/
Disallow: /Reporting/
Disallow: /RegistrationComplete.aspx
WebMatrix 2.0
On Webmaster > Health Check > Blocked URLs, I copy and paste the above code and click Test, and everything looks OK. But when I log out and log back in, I see the code below under Blocked URLs:
User-agent: *
Disallow: /
WebMatrix 2.0
Currently, Google doesn't index my domain and I don't understand why this is happening. Any ideas? Thanks, Seda
Intermediate & Advanced SEO | | Rubix0 -
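For anyone hitting something similar, one way to sanity-check what the live robots.txt actually blocks (independent of the Webmaster Tools test) is Python's built-in robot parser. A minimal sketch, with www.example.com standing in for the real domain:

from urllib.robotparser import RobotFileParser

# Load the robots.txt that the server is actually serving (placeholder domain)
parser = RobotFileParser()
parser.set_url("https://www.example.com/robots.txt")
parser.read()

# True means Googlebot may fetch the URL; False means the file blocks it
print(parser.can_fetch("Googlebot", "https://www.example.com/"))
print(parser.can_fetch("Googlebot", "https://www.example.com/css/style.css"))

If the file being served is the "Disallow: /" version, every check will come back False, which would line up with the domain not being indexed.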
Website Crawl problems
I have a feeling that Google doesn't crawl my website. E.g. for this blog post, I copy a sentence from it and paste it into Google. The page that shows up in the search results is www.silvamethodlife.com/page/9/ - which is just a blog page with all the articles listed, not the link to the article itself! Did anyone ever have this problem? It's definitely some technical issue. Any advice will be deeply appreciated. Thanks
Intermediate & Advanced SEO | | Alexey_mindvalley0 -
Something weird in my Google Webmaster Tools Crawl Errors...
Hey, I recently (this past May) redesigned my e-commerce site from .asp to .php. I am trying to fix all the old pages with 301 redirects that didn't make it in the switch, but I keep getting weird pages coming up in GWT. I have about 400 pages under crawl errors that look like this: "emailus.php?id=MD908070". I delete them and they come back. My site is http://www.moondoggieinc.com. The id #'s are product #'s for products that are no longer on the site, but the site is .php now. GWT also doesn't show a sitemap they're linked in or any other page that links to them. Are these hurting me, and how do I get rid of them? Thanks! KristyO
Intermediate & Advanced SEO | | KristyO0 -
202 error page set in robots.txt versus using crawl-able 404 error
We currently have our error page set up as a 202 page that is unreachable by the search engines, as it is currently blocked in our robots.txt file. Should the error page instead be a 404 error page that is reachable by the search engines? Is there more value, or is it better practice, to use a 404 over a 202? We noticed in our Google Webmaster account that we have a number of broken links pointing to the site, but the 404 error page was not accessible. If you have any insight that would be great; if you have any questions please let me know. Thanks, VPSEO
Intermediate & Advanced SEO | | VPSEO0 -
How to Set Custom Crawl Rate in Google Webmaster Tools?
This is a really silly question about setting a custom crawl rate in Google Webmaster Tools. Anyone can find that section under the Settings tab, but I'm not sure what numbers to put in the "requests per second" and "seconds between requests" fields. I want to set a custom crawl rate for my eCommerce website. I checked my Google Webmaster Tools and found what's shown in the attachment. So, can I use this facility to improve my crawling? 6233755578_33ce83bb71_b.jpg
Intermediate & Advanced SEO | | CommercePundit0