Log files vs. GWT: major discrepancy in number of pages crawled
-
Following up on this post, I did a pretty deep dive on our log files using Web Log Explorer. Several things have come to light, but one of the issues I've spotted is the vast difference between the number of pages crawled by the Googlebot according to our log files versus the number of pages indexed in GWT. Consider:
- Number of pages crawled per the log files: 2,993
- Crawl frequency (i.e. the number of times those pages were crawled): 61,438
- Number of pages indexed by GWT: 17,182,818 (yes, that's right - more than 17 million pages)
We have a bunch of XML sitemaps (around 350) linked from the main sitemap.xml index page; those sitemap files have been crawled fairly frequently, and I suspect that's where a lot of the indexed URLs are coming from. Even so, would that explain why relatively few pages show up as crawled in the logs while so many more are indexed by Google?
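For anyone who wants to reproduce this kind of count without Web Log Explorer, something like the Python sketch below works on a standard combined-format access log. The regex and the "Googlebot" substring check are simplifications (properly verifying Googlebot requires a reverse DNS lookup), and the function just takes an iterable of log lines, so you can pass it an open file.

```python
import re
from collections import Counter

# Combined log format: IP - - [date] "METHOD /path HTTP/x.x" status size "referer" "user-agent"
LOG_LINE = re.compile(r'"(?:GET|HEAD) (\S+) HTTP/[^"]*" \d{3} \S+ "[^"]*" "([^"]*)"')

def googlebot_crawl_stats(lines):
    """Return (unique pages crawled, total crawl hits) for Googlebot requests."""
    hits = Counter()
    for line in lines:
        m = LOG_LINE.search(line)
        if m and "Googlebot" in m.group(2):  # group(2) is the user-agent string
            hits[m.group(1)] += 1            # group(1) is the requested path
    return len(hits), sum(hits.values())
```

The first number corresponds to "pages crawled" and the second to "crawl frequency" in the figures above.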
-
I'll reserve my answer until you hear from your dev team. A massive site for sure.
One other question/comment: just because there are 13 million URLs in your sitemap doesn't necessarily mean there are that many pages on the site. We could be talking about URI versus URL.
I'm pretty sure you know what I mean by that, but for others reading this who may not: a URI is the unique Web address of any given resource, while a URL is generally used to reference a complete Web page. An example of this would be an image. While it certainly has its own unique address on the Web, it most often does not have its very own "page" on a Website (although there are certainly exceptions to that).
So, I could see a site having millions of URIs, but very few sites have 17 million+ pages. To put it into perspective, Alibaba and IBM each show roughly 6-7 million pages indexed in Google; Walmart has between 8 and 9 million.
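One quick way to sanity-check the URI-vs-page split on an exported URL list is to bucket by file extension. This is just a sketch; the asset-extension list here is an illustrative assumption you'd tune to the site, and it deduplicates by path (so query-string variants of the same page collapse together, which is itself a crude duplicate-content check):

```python
from urllib.parse import urlsplit

# Extensions treated as resources (images, scripts, etc.) rather than pages.
# Illustrative only -- extend for your own site.
ASSET_EXTENSIONS = {".jpg", ".jpeg", ".png", ".gif", ".css", ".js", ".pdf", ".ico"}

def count_pages_vs_resources(uris):
    """Split a list of URIs into unique page paths and unique resource paths."""
    pages, resources = set(), set()
    for uri in uris:
        path = urlsplit(uri).path.lower()      # drops ?query and #fragment
        last_segment = path.rsplit("/", 1)[-1]
        ext = path[path.rfind("."):] if "." in last_segment else ""
        (resources if ext in ASSET_EXTENSIONS else pages).add(path)
    return len(pages), len(resources)
```

If the page count comes back far below the URI count, that's the URI-versus-URL gap in action.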
So where I'm headed in my thinking is major duplicate content issues...but, as I said, I'm going to reserve further comment until you hear back from your developers.
This is a very interesting thread so I want to know more. Cheers!
-
Waiting on an answer from our dev team on that now. In the meantime, here's what I can tell you:
- Number submitted in XML sitemaps per GWT: 13,882,040 (number indexed: 13,204,476, or 95.1%)
- Number indexed: 17,182,818
- Difference: 3,300,778
- Number of URLs throwing 404 errors: 2,810,650
- 2,810,650 / 3,300,778 = 85%
I'm sure the ridiculous number of 404s on the site (I mentioned them in a separate post here) is at least partly to blame. How much, though? I know Google says that 404s don't hurt SEO, but the fact that the 404 count is 85% of the difference between the number indexed and the number submitted is not exactly a coincidence.
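Spelling that arithmetic out in one place, with the figures copied from GWT as quoted:

```python
# Figures as quoted from Google Webmaster Tools
submitted = 13_882_040              # URLs submitted in XML sitemaps
indexed_from_sitemaps = 13_204_476  # of those, indexed
indexed_total = 17_182_818          # total indexed
not_found = 2_810_650               # URLs returning 404

sitemap_rate = indexed_from_sitemaps / submitted
difference = indexed_total - submitted  # indexed pages beyond what the sitemaps submit

print(f"sitemap indexation rate: {sitemap_rate:.1%}")                # 95.1%
print(f"difference: {difference:,}")                                 # 3,300,778
print(f"404s as share of difference: {not_found / difference:.0%}")  # 85%
```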
(Apologies if these questions seem a bit dense or elementary. I've done my share of SEO, but never on a site this massive.)
-
Hi. Interesting question. You had me at "log files."
So before I give a longer, more detailed answer, I have a follow-up question: does your site really have 17+ million pages?