Log files vs. GWT: major discrepancy in number of pages crawled
-
Following up on this post, I did a pretty deep dive on our log files using Web Log Explorer. Several things have come to light, but one issue I've spotted is the vast difference between the number of pages crawled by Googlebot according to our log files and the number of pages indexed in GWT. Consider:
- Number of pages crawled per the log files: 2,993
- Crawl frequency (i.e., number of times those pages were crawled): 61,438
- Number of pages indexed per GWT: 17,182,818 (yes, that's right: more than 17 million pages)
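For what it's worth, those first two numbers (unique pages crawled vs. crawl frequency) can be reproduced without a commercial tool. A minimal sketch, assuming a combined log format and matching on the user-agent string only (proper Googlebot verification needs a reverse-DNS check, since the UA string is easily spoofed):

```python
from collections import Counter

def googlebot_crawl_stats(log_lines):
    """Count unique URLs crawled and total crawl hits for Googlebot.

    Assumes common/combined log format: the request is the first quoted
    field ('GET /path HTTP/1.1') and the user agent appears later in the
    line. A production version should verify Googlebot via reverse DNS.
    """
    hits = Counter()
    for line in log_lines:
        if "Googlebot" not in line:
            continue
        try:
            request = line.split('"')[1]   # e.g. 'GET /page HTTP/1.1'
            url = request.split()[1]       # the requested path
        except IndexError:
            continue                       # malformed line, skip it
        hits[url] += 1
    # (pages crawled, crawl frequency)
    return len(hits), sum(hits.values())
```

Feed it the raw access log and the first value should line up with the "pages crawled" figure, the second with "crawl frequency."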
We have a bunch of XML sitemaps (around 350) that are linked on the main sitemap.xml page; these pages have been crawled fairly frequently, and I think this is where a lot of links have been indexed. Even so, would that explain why we have relatively few pages crawled according to the logs but so many more indexed by Google?
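One way to sanity-check what those ~350 child sitemaps actually contain is to walk the sitemap index programmatically. A minimal sketch using the stdlib parser (fetching is left out; this just parses the index XML, and the example URLs are hypothetical):

```python
import xml.etree.ElementTree as ET

# Namespace defined by the sitemaps.org protocol
SITEMAP_NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

def child_sitemaps(index_xml):
    """Return the <loc> of every child sitemap listed in a sitemap index."""
    root = ET.fromstring(index_xml)
    return [loc.text.strip() for loc in root.iter(SITEMAP_NS + "loc")]
```

Fetching each child sitemap and summing its `<url>` entries gives an independent count to compare against GWT's "submitted" figure.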
-
I'll reserve my answer until you hear from your dev team. A massive site for sure.
One other question/comment: just because there are 13 million URLs in your sitemap doesn't necessarily mean there are that many pages on the site. We could be talking about URI versus URL.
I'm pretty sure you know what I mean by that, but for others reading this who may not: a URI is the unique Web address of any given resource, while a URL is generally used to reference a complete Web page. An example of this would be an image. While it certainly has its own unique address on the Web, it most often does not have its very own "page" on a Website (although there are certainly exceptions to that).
So, I could see a site having millions of URIs, but very few sites have 17 million+ pages. To put it into perspective, Alibaba and IBM show roughly 6-7 million pages indexed in Google, and Walmart between 8-9 million.
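One quick way to test whether millions of "pages" are really URI variants of far fewer pages is to normalize the submitted URLs and count the distinct results. A hypothetical sketch; which query parameters count as noise (tracking, sorting, session IDs) is an assumption you'd adapt to the site in question:

```python
from urllib.parse import urlsplit, parse_qsl, urlencode, urlunsplit

# Query parameters that commonly spawn duplicate URIs for one page.
# This list is an assumption -- tune it for the actual site.
NOISE_PARAMS = {"utm_source", "utm_medium", "utm_campaign", "sessionid", "sort"}

def normalize(url):
    """Collapse URI variants: host case, trailing slash, noise params, fragments."""
    parts = urlsplit(url)
    query = [(k, v) for k, v in parse_qsl(parts.query) if k not in NOISE_PARAMS]
    return urlunsplit((
        parts.scheme.lower(),
        parts.netloc.lower(),
        parts.path.rstrip("/") or "/",
        urlencode(sorted(query)),
        "",  # drop the fragment
    ))

def distinct_pages(urls):
    return len({normalize(u) for u in urls})
```

If the distinct count comes out far below the raw URL count, that's the duplicate-content story in miniature.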
So where I'm headed in my thinking is major duplicate content issues...but, as I said, I'm going to reserve further comment until you hear back from your developers.
This is a very interesting thread so I want to know more. Cheers!
-
Waiting on an answer from our dev team on that now. In the meantime, here's what I can tell you:
- Number submitted in XML sitemaps per GWT: 13,882,040 (number indexed: 13,204,476, or 95.1%)
- Number indexed: 17,182,818
- Difference: 3,300,778
- Number of URLs throwing 404 errors: 2,810,650
- 2,810,650 / 3,300,778 = 85%
I'm sure the ridiculous number of 404s on the site (I mentioned them in a separate post here) is at least partially to blame. How much, though? I know Google says that 404s don't hurt SEO, but the fact that the number of 404s amounts to 85% of the gap between the number indexed and the number submitted is not exactly a coincidence.
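Sanity-checking that arithmetic, with the three GWT figures plugged in:

```python
# The three figures reported above (from GWT)
submitted  = 13_882_040   # URLs submitted in the XML sitemaps
indexed    = 17_182_818   # URLs Google reports as indexed
errors_404 = 2_810_650    # URLs throwing 404 errors

gap = indexed - submitted
share = errors_404 / gap

print(f"Gap between indexed and submitted: {gap:,}")   # 3,300,778
print(f"404s as a share of that gap: {share:.1%}")     # 85.2%
```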
(Apologies if these questions seem a bit dense or elementary. I've done my share of SEO, but never on a site this massive.)
-
-
Hi. Interesting question. You had me at "log files." So before I give a longer, more detailed answer, I have a follow-up question: does your site really have 17+ million pages?