Some URLs in the sitemap not indexed
-
Our company site has hundreds of thousands of pages. Yet no matter how big or small the total page count, I have found that the "URLs Indexed" figure in GWMT has never matched "URLs in Sitemap". Both when we were small and now that we have a LOT more pages, there has always been a discrepancy of roughly 10% missing from the index.
It's difficult to know which pages are not indexed, but I have found some that I can verify are in the Sitemap.xml file yet absent from the index. When I "Fetch and Render" those missing pages in GWMT they render fine, so it's not as though they're blocked or inaccessible.
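For reference, this is roughly how I spot-check: pull the full URL list out of the sitemap and compare entries against the index by hand. A minimal sketch using Python's standard library; the inline example.com snippet is just a stand-in for our real Sitemap.xml.

```python
import xml.etree.ElementTree as ET

# The standard sitemap namespace from sitemaps.org
NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

def sitemap_urls(xml_text):
    """Return every <loc> URL in a sitemap document."""
    root = ET.fromstring(xml_text)
    return [loc.text.strip() for loc in root.findall(".//sm:loc", NS)]

# Tiny inline snippet standing in for a real Sitemap.xml
sample = """<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://example.com/</loc></url>
  <url><loc>https://example.com/products/widget</loc></url>
</urlset>"""

print(sitemap_urls(sample))
```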
Any ideas on why this is? Is this type of discrepancy typical?
-
Thanks. Very helpful!
-
It's great to know that a ~10% discrepancy is normal. Hard to know otherwise.
That article about Screaming Frog is super helpful, thanks!
-
I have never had a site with 100% of its pages crawled. Sometimes Google will drop a page for being too similar to another, for not being informative enough, or because of canonical links or redirects.
As Ryan says, don't rely on Moz alone; use Screaming Frog to get a good view of your site too and see if there are any errors. You can also run the Frog whenever you like; it's just a little more technical to interpret.
Xenu, oooh, never heard of that one Ryan, thanks!
Just looked into Xenu. Screaming Frog does it all and then some.
-
Hi Mase,
I've managed sites with hundreds of thousands of pages too, and in my experience a discrepancy between what's offered up via the sitemaps and what gets indexed is typical (dare I say it, a 10% discrepancy seems pretty good!). Pages deeper in the site seem to suffer this fate more often than those with fewer subfolders, as do those with thin content.
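If you want to quantify that depth point, one rough way is to count path segments for each URL in your sitemap and see whether the unindexed pages cluster at higher depths. A quick sketch; the example.com URLs are placeholders, and in practice you'd feed in the <loc> entries from your own sitemap.

```python
from urllib.parse import urlparse

def url_depth(url):
    """Count non-empty path segments: /a/b/c -> depth 3."""
    return len([seg for seg in urlparse(url).path.split("/") if seg])

# Placeholder URLs standing in for a real sitemap's <loc> entries
for url in [
    "https://example.com/",
    "https://example.com/widgets/",
    "https://example.com/widgets/blue/small/item-42",
]:
    print(url_depth(url), url)
```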
I agree completely with Ryan's comment about Screaming Frog: it is an invaluable tool for site audits and offers lots of other useful site insights as well. You might find this article interesting to get a sense of the many ways you can use SF: http://www.seerinteractive.com/blog/screaming-frog-guide/
-
You're welcome. Definitely take a look at a crawler that gives you more insight, especially with a site as large as yours. Just note that no matter what, you may never achieve an exact match between the pages you've submitted and the number indexed, as Google can decide not to index a page for reasons beyond its presence in a sitemap. It would also be useful to look at how many of your pages receive visits in analytics. That will give you an idea of the percentages of pages in the sitemap vs. the index vs. active.
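As a back-of-the-envelope illustration of that last comparison (the URL sets here are made up), sitemap vs. active pages is just set arithmetic once you have the sitemap URLs and an analytics export:

```python
# Made-up URL sets: "in_sitemap" from Sitemap.xml, "active" from an
# analytics export of pages that received at least one visit.
in_sitemap = {"/", "/about", "/products/a", "/products/b", "/blog/post-1"}
active = {"/", "/about", "/products/a"}

inactive = in_sitemap - active  # submitted but getting no traffic
print(f"{len(active) / len(in_sitemap):.0%} of sitemap pages are active")
print(sorted(inactive))
```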
-
I have not run the site through those tools you mentioned; I'm unfamiliar with them.
I am not, however, receiving any errors on those pages. And when I "Fetch and Render" in GWMT, they look and render fine without errors. I'm able to submit them to the index one by one.
Thanks for your response, Ryan.
-
Hi Mase. Are you getting errors on URLs you've submitted? Or have you run other crawlers like Xenu or Screaming Frog on your site to surface any possible errors? It's also good to know which pages might not have enough content to be indexed: filter pages, sorting views, etc.