Crawl errors in GWT!
-
I have been seeing a large number of access denied and not found crawl errors. I have since fixed the issues causing these errors; however, I am still seeing them in Webmaster Tools.
At first I thought the data was outdated, but the data is tracked on a daily basis!
Does anyone have experience with this? Does GWT really re-crawl all those pages/links every day to see if the errors still exist?
Thanks in advance for any help/advice.
-
Neither access denied nor not found crawl errors are dealbreakers as far as Google is concerned. A not found error usually just means you have links pointing to pages that don't exist (this is how you can be receiving more errors than pages crawled - a not found error means that a link to that page was crawled, but since there's no page there, no page was crawled). Access denied is usually caused by either requiring a login or blocking the search bots with robots.txt.
If the links causing 404 errors aren't on your site, it's certainly possible that errors would still be appearing. One thing you can do is double-check your 404 page to make sure it really is returning a 404 Not Found status at the URL level. One common thing I've seen all over the place is that sites will institute a 302 redirect to a single catch-all 404 page (like www.example.com/notfound). Because the actual URL isn't returning a 404, bots will sometimes just keep crawling those links over and over again.
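One quick way to verify this is to request a URL that shouldn't exist and look at the raw status code without following redirects. A minimal Python sketch (the URL below is just a placeholder for a non-existent path on your own site):

```python
import requests

# Placeholder URL - substitute a path on your own site that should not exist
url = "https://www.example.com/this-page-should-not-exist"

# allow_redirects=False keeps any 301/302 to a catch-all "not found" page visible
response = requests.get(url, allow_redirects=False)

print(f"{url} -> {response.status_code}")
# 404     : correct, the URL itself reports "not found"
# 301/302 : the server redirects instead of returning a real 404
# 200     : a "soft 404" - the page renders fine, so bots assume it exists
```

Anything other than a 404 (or 410) at the URL level is worth fixing before expecting the error reports to settle down.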
Google doesn't necessarily crawl everything every day or update everything every day. If your traffic isn't being affected by these errors, I would just try as best you can to minimize them, and otherwise not worry too much about it.
-
Crawl errors can also be caused by links to those pages on other sites or in Google's own index. When Google revisits those URLs and doesn't find them, they get flagged as 404 errors.
-
BTW, the crawl stats show Google crawling about 3-10K pages a day, yet the daily errors number over 100K. Is this even possible? How can it find so many errors if the spiders aren't even crawling that many pages?
Thanks again!
Related Questions
-
Google crawling 200 page site thousands of times/day. Why?
Hello all, I'm looking at something a bit wonky for one of the websites I manage. It's similar enough to other websites I manage (built on a template) that I'm surprised to see this issue occurring. The XML sitemap submitted shows Google there are 229 pages on the site. Starting in the beginning of December Google really ramped up their intensity in crawling the site. At its high point Google crawled 13,359 pages in a single day. I mentioned I manage other similar sites - this is a very unusual spike. There are no resources like infinite scroll that auto-generates content and would cause Google some grief. So the follow-up questions to my "why?" are "how is this affecting my SEO efforts?" and "what do I do about it?". I've never encountered this before, but I think limiting my crawl budget would be treating the symptom instead of finding the cure. Any advice is appreciated. Thanks! *edited for grammar.
Intermediate & Advanced SEO | brettmandoes
-
Huge increase in server errors and robots.txt
Hi Moz community! Wondering if someone can help? One of my clients (an online fashion retailer) has seen a huge increase in server errors (500s and 503s) over the last 6 weeks, and it has got to the point where people cannot access the site because of server errors. The client has recently changed hosting companies to deal with this, and they have just told us they removed the DNS records once the name servers were changed; they have now fixed this and are waiting for the name servers to propagate again. These errors also correlate with a huge decrease in pages blocked by the robots.txt file, which makes me think someone has perhaps changed this and not told anyone... Anyone have any ideas here? It would be greatly appreciated! 🙂 I've been chasing this up with the dev agency and the hosting company for weeks, to no avail. Massive thanks in advance 🙂
Intermediate & Advanced SEO | labelPR
-
Unnatural Inbound Links Warning in GWT
Hi all, A bit of a long question so apologies in advance, but please bear with me... My client has received an 'Unnatural Inbound Links' warning and it is now my task to try and resolve it through a process of: highlighting the unnatural links; requesting that the links be removed (via webmaster requests); possibly using the Disavow Tool; and submitting a Reconsideration Request. So I downloaded my client's link profile from both OSE and GWT in CSV format and compared - the number of links returned was considerably higher in GWT than in OSE...? I set about going through the links, first filtering them into order so that I could see blocks of links from the same URL, and highlighted them in colours: red - definitely needs to be removed; orange - suspect, needs further investigation; yellow - seems to be OK but may revisit; green - happy with the link, no further action. So to my question: is it 'black & white' - a case of 'good link' v 'bad link' - or could there be some middle ground? (Am I making this process even more confusing than it actually is?) As an example, here are some 'orange' URLs: http://www.24searchengines.com/ (not the exact URL, as it goes to the travel section, which is my client's niche) - this to me looks spammy and I would normally 'paint it red' and look to remove it; however, when I go to the 'contact us' page (http://www.24searchengines.com/texis/open/allthru?area=contactus) and follow the link to remove from the directory, it takes me here: http://www.dmoz.org/docs/en/help/update.html DMOZ??? My client has a 'whole heap' of these types of links: http://www.25searchengines.com/ http://www.26searchengines.com/ http://www.27searchengines.com/ http://www.28searchengines.com/ ...and many many more!! Here is another example: http://foodys.eu/ http://foodys.eu/2007/01/04/the-smoke-ring-bbq-community/ ...plus many more... My client is in the 'cruise niche' and as there is a 'cruise' section on the site I'm not sure whether this constitutes a good, bad or indifferent link! Finally, prior to me working with this client (1 month), they moved their site from a .co.uk to a .com domain and redirected all links from the .co.uk to the .com (according to GWT, over 16k have been redirected). A lot of these 'spammy' links pointed to the .co.uk and have thus been redirected - should I even consider removing the redirection, or will that have severe consequences? Apologies for the long (long) post, I know I'm heading in the right direction but some assurance wouldn't go amiss! 🙂 Many thanks Andy
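As a side note on the Disavow Tool step above: if it does end up being needed, the file Google accepts is just a plain-text list of URLs and domain: entries. A minimal sketch of the format with placeholder domains (not a suggestion of what to actually disavow here):

```
# Lines beginning with # are comments
# Disavow every link from an entire domain
domain:spammy-directory.example
# Disavow a single URL
http://link-farm.example/some-page.html
```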
Intermediate & Advanced SEO | TomKing
-
Google crawled my rich snippet pages and then excluded them
Hi guys, we added schema.org markup a few months ago and it all looked fine and showed up, then suddenly last month all the crawled pages disappeared from the Webmaster Tools Structured Data report (see the screenshot attached). This happened to another site of mine and I cannot figure out what causes it. Nothing has been changed on the pages, and you can see for yourself in the HTML code. Any ideas as to why this might have happened?
Intermediate & Advanced SEO | Walltopia
-
How accurate are the index figures in GWT?
I've been looking at a site in GWT and the number of indexed URLs is very low compared with the number of submitted URLs in the XML sitemaps. The site has several stores which are all submitted using different sitemaps. When you perform a search in Google, e.g. site:domain.com/store1, site:domain.com/store2, site:domain.com/store3, the results are similar to the Webmaster Tools figures. However, looking in the analytics at landing pages used for organic traffic from Google shows a much higher number of pages. If these pages aren't indexed as reported in GWT, how could they be found in the results and be recorded as landing pages?
Intermediate & Advanced SEO | edwardlewis
-
SEOMOZ crawl all my pages
SEOMOZ crawls all my pages, including ".do" URLs (all the web pages users reach after signing up). Because of this it uses up my entire 10,000-page crawl quota and flags duplicate pages. Google is not crawling the pages users reach after sign-up, I guess because these are private pages for customers. The main question is how we can limit the SEOMOZ crawl bot. If the bot could stay out of the ".do" Java extensions, it would be perfect for starting the SEO analysis. What do you think? Cheers. Example: .do Java extension (after sign-up page, Google can't crawl): http://magaza.turkcell.com.tr/showProductDetail.do?psi=1001694&shopCategoryId=1000021&model=Apple-iPhone-3GS-8GB Normal page (Google can crawl): http://magaza.turkcell.com.tr/telefon/Apple-iPhone-3GS-8GB/1001694/.html
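For what it's worth, one thing to try is a robots.txt rule scoped to Moz's crawler only, so Googlebot is unaffected. A rough sketch - it assumes the crawler identifies itself as rogerbot and honors wildcard patterns, which is worth double-checking against Moz's documentation before relying on it:

```
# Applies only to Moz's crawler; Googlebot ignores this group
User-agent: rogerbot
# Block any URL containing ".do" (wildcard support assumed)
Disallow: /*.do
# Plain prefix match as a fallback, no wildcards needed
Disallow: /showProductDetail.do
```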
Intermediate & Advanced SEO | hcetinsoy
-
How to prevent Google from crawling our product filter?
Hi All, We have a crawler problem on one of our sites, www.sneakerskoopjeonline.nl. On this site, visitors can specify criteria to filter available products. These filters are passed as HTTP GET arguments, and the number of possible filter URLs is virtually limitless. In order to prevent duplicate content, or an insane number of pages in the search indices, our software automatically adds noindex, nofollow and noarchive directives to these filter result pages. However, we're unable to get crawlers (Google in particular) to ignore these URLs. We've already changed the on-page filter HTML to JavaScript, hoping this would cause the crawler to ignore it. However, it seems that Googlebot executes the JavaScript and crawls the generated URLs anyway. What can we do to prevent Google from crawling all the filter options? Thanks in advance for the help. Kind regards, Gerwin
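One direction worth testing is blocking the filter URLs in robots.txt rather than relying only on the meta directives, since a noindex/nofollow tag can only be read after the page has already been crawled. A rough sketch - the parameter names below are made up and would need to be replaced with whatever query-string keys the filters actually use:

```
User-agent: *
# Hypothetical filter parameters - swap in the real query-string keys
Disallow: /*?*maat=
Disallow: /*?*kleur=
# Blunter option: block every URL that carries a query string
# Disallow: /*?
```

The trade-off is that Google can no longer see the noindex on blocked URLs, so any that are linked externally may still appear as bare, title-only listings; robots.txt stops the crawling, not necessarily the indexing of the URL itself.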
Intermediate & Advanced SEO | footsteps
-
Working out exactly how Google is crawling my site if I have loooots of pages
I am trying to work out exactly how Google is crawling my site, including entry points and its path from there. The site has millions of pages and hundreds of thousands indexed. I have simple log files with a timestamp and the URL that Googlebot was on. Unfortunately there are hundreds of thousands of entries even for one day, and as it is a massive site I am finding it hard to work out the spider's paths. Is there any way, using the log files and Excel or other tools, to work this out simply? Also, I was expecting the bot to almost instantaneously go through each level, e.g. main page --> category page --> subcategory page (expecting the same timestamp), but this does not appear to be the case. Does the bot follow a path right through to the deepest level it can / is allowed to for that crawl and then return to the higher-level category pages at a later time? Any help would be appreciated. Cheers
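Not a full answer, but a rough Python sketch of the kind of log crunching that can make this manageable. It assumes the request line is quoted in the usual combined-log style and filters on the raw "Googlebot" substring rather than verifying the bot, so the field positions and the filter may need adjusting for your log format:

```python
import re
from collections import Counter

LOG_FILE = "access.log"  # placeholder path

# Matches the quoted request line, e.g. "GET /some/url HTTP/1.1"
line_re = re.compile(r'"(?:GET|HEAD)\s+(?P<url>\S+)')

hits_per_url = Counter()
hits_per_depth = Counter()

with open(LOG_FILE, encoding="utf-8", errors="replace") as f:
    for line in f:
        if "Googlebot" not in line:  # crude user-agent filter
            continue
        m = line_re.search(line)
        if not m:
            continue
        url = m.group("url")
        hits_per_url[url] += 1
        # Approximate "depth" by path segments, e.g. /cat/sub/page = 3
        depth = len([p for p in url.split("?")[0].split("/") if p])
        hits_per_depth[depth] += 1

print("Most-crawled URLs:")
for url, count in hits_per_url.most_common(20):
    print(f"{count:>8}  {url}")

print("\nHits by URL depth (rough proxy for category vs deep pages):")
for depth, count in sorted(hits_per_depth.items()):
    print(f"depth {depth}: {count}")
```

Aggregating like this (which URLs and which depths get the most hits) tends to be more tractable than tracing individual paths, since Googlebot fetches over many parallel connections rather than walking the site like a single visitor - which is also why the level-by-level timestamps you were expecting don't show up.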
Intermediate & Advanced SEO | soeren.hofmayer