Log files vs. GWT: major discrepancy in number of pages crawled
-
Following up on this post, I did a pretty deep dive into our log files using Web Log Explorer. Several things have come to light, but one of the issues I've spotted is the vast difference between the number of pages crawled by Googlebot according to our log files and the number of pages indexed in GWT. Consider:
- Number of pages crawled per log files: 2,993
- Crawl frequency (i.e. number of times those pages were crawled): 61,438
- Number of pages indexed per GWT: 17,182,818 (yes, that's right: more than 17 million pages)
We have around 350 XML sitemaps linked from the main sitemap.xml index; those sitemap files have been crawled fairly frequently, and I think that's where a lot of the indexed URLs are coming from. Even so, would that explain why relatively few pages show as crawled in the logs but so many more show as indexed by Google?
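For anyone who wants to replicate these counts without Web Log Explorer, here's a minimal Python sketch, assuming Apache/nginx combined log format. The sample lines, paths, and regex here are illustrative, not taken from the actual logs:

```python
import re
from collections import Counter

# Combined Log Format:
# ip - - [date] "METHOD /path HTTP/x.x" status size "referer" "user-agent"
LINE_RE = re.compile(r'"(?:GET|HEAD) (\S+) HTTP/[^"]*" \d{3} \S+ "[^"]*" "([^"]*)"')

def googlebot_crawl_stats(log_lines):
    """Return (unique pages crawled, total crawl hits) for Googlebot requests."""
    crawled = Counter()
    for line in log_lines:
        m = LINE_RE.search(line)
        if m and "Googlebot" in m.group(2):  # group 2 is the user-agent
            crawled[m.group(1)] += 1         # group 1 is the requested path
    return len(crawled), sum(crawled.values())

# Tiny worked example: two Googlebot hits on one URL, one non-Googlebot hit.
sample = [
    '66.249.66.1 - - [01/Jan/2014:00:00:01 +0000] "GET /page-a HTTP/1.1" 200 512 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"',
    '66.249.66.1 - - [01/Jan/2014:00:00:09 +0000] "GET /page-a HTTP/1.1" 200 512 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"',
    '10.0.0.1 - - [01/Jan/2014:00:00:12 +0000] "GET /page-b HTTP/1.1" 200 512 "-" "Mozilla/5.0"',
]
unique, hits = googlebot_crawl_stats(sample)
print(unique, hits)  # 1 unique page, 2 crawl hits
```

Run over a real log, `unique` corresponds to the 2,993 figure and `hits` to the 61,438 crawl-frequency figure. Note that matching on the user-agent string alone can be fooled by spoofed bots; a reverse-DNS check on the IP is the stricter test.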
-
I'll reserve my answer until you hear from your dev team. A massive site for sure.
One other question/comment: just because there are 13 million URLs in your sitemap doesn't necessarily mean there are that many pages on the site. We could be talking about URI versus URL.
I'm pretty sure you know what I mean by that, but for others reading this who may not: a URI is the unique Web address of any given resource, while a URL is generally used to refer to a complete Web page. An image is a good example. While it certainly has its own unique address on the Web, it most often does not have its very own "page" on a Website (although there are certainly exceptions).
So, I could see a site having millions of URIs, but very few sites have 17 million+ pages. To put it into perspective, Alibaba and IBM roughly show 6-7 million pages indexed in Google. Walmart has between 8-9 million.
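To make the URI-versus-URL point concrete, here's a tiny illustrative sketch (the example.com URLs are hypothetical): many distinct URIs can collapse to far fewer candidate "pages" once you normalize away query strings and trailing slashes. Asset URIs like images would still need to be filtered by file type in a real analysis:

```python
from urllib.parse import urlsplit

def to_page(url):
    """Collapse a URI to a crude 'page' identity: drop the query string,
    fragment, and trailing slash. For illustration only."""
    parts = urlsplit(url)
    path = parts.path.rstrip("/") or "/"
    return f"{parts.scheme}://{parts.netloc}{path}"

uris = [
    "http://example.com/widgets",
    "http://example.com/widgets/",
    "http://example.com/widgets?sort=price",
    "http://example.com/widgets?sort=name&page=2",
    "http://example.com/img/widgets-hero.jpg",
]
pages = set(map(to_page, uris))
print(len(uris), "URIs ->", len(pages), "candidate pages")  # 5 URIs -> 2
```

Scale that collapse up and you can see how a sitemap with 13 million URIs might describe a much smaller set of actual pages.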
So where I'm headed in my thinking is major duplicate content issues...but, as I said, I'm going to reserve further comment until you hear back from your developers.
This is a very interesting thread so I want to know more. Cheers!
-
Waiting on an answer from our dev team on that now. In the meantime, here's what I can tell you:
-
Number submitted in XML sitemaps per GWT: 13,882,040 (number indexed: 13,204,476, or 95.1%)
-
Number indexed: 17,182,818
-
Difference: 3,300,778
-
Number of URLs throwing 404 errors: 2,810,650
-
2,810,650 / 3,300,778 = 85%
I'm sure the ridiculous number of 404s on the site (I mentioned them in a separate post here) is at least partially to blame. How much, though? I know Google says 404s don't hurt SEO, but the fact that the number of 404s accounts for 85% of the difference between the number indexed and the number submitted hardly seems like a coincidence.
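The arithmetic above, restated as a quick sanity check (same figures as reported, nothing assumed):

```python
# Figures from GWT, as posted above.
submitted  = 13_882_040  # URLs submitted in XML sitemaps
indexed    = 17_182_818  # pages reported indexed
errors_404 = 2_810_650   # URLs throwing 404 errors

diff = indexed - submitted          # pages indexed beyond the sitemaps
share = errors_404 / diff * 100     # 404s as a share of that gap

print(f"{diff:,}")      # 3,300,778
print(f"{share:.0f}%")  # 85%
```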
(Apologies if these questions seem a bit dense or elementary. I've done my share of SEO, but never on a site this massive.)
-
-
Hi. Interesting question. You had me at "log files." Before I give a longer, more detailed answer, I have a follow-up question: does your site really have 17+ million pages?