How to determine which pages are not indexed
-
Is there a way to determine which pages of a website are not being indexed by the search engines?
I know Google Webmasters has a sitemap area where it tells you how many urls have been submitted and how many are indexed out of those submitted. However, it doesn't necessarily show which urls aren't being indexed.
-
For checking the Google index I recommend using https://sitecheck.tools/en/check-page-indexed/. This service is completely free and can handle anything from 100 to 100 million pages. It’s an efficient way to determine which of your pages are indexed by Google. Whether you're managing a small site or a large portal, this tool offers a practical solution for monitoring your site’s indexing status.
-
The better way is to check in the Search Console. For example, Bing Webmaster Tools and Google Search Console have special tabs where you can see which pages are indexed and which pages are not.
There are also a few services that can make this more UX-friendly, for example my service https://sitecheck.tools/. If you need help, please let me know.
-
@mfrgolfgti Lol, yes that does work but not for indexing?
-
Hi, I know this is an old question but I wanted to ask about the first paragraph of your answer: "You can start by trying the "site:domain.com" search. This won't show you all the pages which are indexed, but it can help you determine which ones aren't indexed."
Do you happen to know why doing a site:domain.com search doesn't show all the indexed pages? I've just discovered this for our website. Doing the site: command shows 73 pages, but checking through the list, there are lots of pages not included. However, if I do the site:domain.com/page.html command for those individual pages, they do come up in the search results page. I don't understand why, though?
-
I'm running into this same issue where I have about a quarter of a client's site not indexing. Using the site:domain.com trick shows me 336 results - which I somehow need to add to a csv file, compare against the URLs crawled by screaming frog, and then use VLOOKUP to find the unique values.
So how can I get those 300+ results exported to a csv file for analysis?
-
DeepCrawl will provide the information with one tool. It's not inexpensive, but it's definitely the best tool out there. You have to connect it to Google Analytics in order for it to give you this information, but it will show you how many of your URLs are indexed and how many are not (but should be).
If connected to Google Webmaster Tools and Google Analytics, it can then use any of the many ways of scraping or indexing the site.
Technically that is more than one tool, but it is a good way.
All the best,
tom
-
Crawl the domain using SF and then use URL profiler to check their indexation status.
You'll need proxies.
It can be done with ScrapeBox too.
Otherwise you can probably use Sheets with some IMPORTXML wizardry to create a query on Google.
-
hi Paul,
I too have not had any luck with Screaming Frog actually checking every link that it claims it will. You're exactly right: it will check the homepage or the single link that you choose, but from my experience it will not check everything. I have a friend who has the paid version; I will ask him.
I'll be sure to let you know, because I do agree with you. I just found this out myself; in fact, it is misleading to say "check all" and really check just one.
Excellent tutorial, by the way, on how to do this seemingly easy task, which when attempted is truly not easy at all.
Sincerely,
Thomas
PS: I get this result with site:www.example.com
It gives me the opportunity to see all the indexed pages Google has processed; however, I would have to compare them to a CSV file in order to actually know what is missing.
I really like your example and definitely will use that in the future.
-
Thanks for the reminder that Screaming Frog has that "Check Index" functionality, Thomas.
Unfortunately, I've never been able to get that method to check more than one link at a time, as all it does is send the request to a browser to check. Even highlighting multiple URLs and checking for indexation only checks the first one. Great for spot checks, but not what Seth is looking for, I don't think. My other post details an automatic way to check a site's hundreds (or thousands) of pages at a time.
I only have the free version of Screaming Frog on this machine at the moment so would be very interested to know if the paid version changes this.
Paul
-
Dear Paul,
thank you for taking the time to address this.
I was extremely hasty when I wrote my first answer: I copied and pasted from dictation software that I use, and then went on to wrongly say it was the correct way to do something. However, Screaming Frog SEO Spider
is a tool that I referenced early on. This tool allows you to see 100% of all the links you are hosting at the time you run the scan,
and includes the ability to check whether a page is indexed with Google, Bing, and Yahoo. When I referenced this software nobody took notice, as I probably looked like I did not know what I was talking about.
In hindsight I should have kept bringing up Screaming Frog, but I did not; I simply brought up other ways to check lost links. In my opinion, going into Google and clicking one by one on what you do or do not know is indexed is a very long and arduous task.
Screaming Frog allows you to select internal links, then right-click and choose "check if indexed"; a table comes down on the right side where you can select from the three big search engines. You can do many more things with this fantastic tool, but I did not illustrate, as well as I am right now, exactly how it should be used or what its capabilities are. I truly thought that once I had referenced it, somebody would look into it and see what I was speaking about; however, hindsight is 20/20. I appreciate your comment very much and hope you can see that yes, I was mistaken at the beginning, but I did come up with an automated tool to answer the question asked.
Screaming Frog can be used on PC, Mac, or Linux. It is free to download and comes in a paid version with even more abilities than what is showcased in the free edition. It is only 2 MB in size and uses almost no RAM on a Mac; I don't know how big it is on the PC.
here's the link to the software
http://www.screamingfrog.co.uk/seo-spider/
I hope that you will accept my apologies for not paying as much attention as I should have to what I pasted, and I hope this tool will be of use to you.
Respectfully,
Thomas
-
There is no individual tool capable of providing the info you're looking for, Seth. At least as far as I've ever come across.
HOWEVER! It is possible to do it if you are willing to do some of the work on your own to collect and manipulate data using several tools. Essentially this method automates the approach Takeshi has mentioned.
The short answer
First you'll create a list of all the pages on your website. Then you'll create a list of all the URLs that Google says are indexed. From there, you will use Excel to subtract the indexed URLs from the known URLs, leaving a list of non-indexed URLs, which is what you asked for. Ready? Here's how.
Collect a list of all your site's pages
You can do this in several ways. If you have a reliable and complete sitemap, you can get this data there. If your CMS is capable of outputting such a list, great. If neither of these is an option, you can use the Screaming Frog spider to get the data (remember the free version will only collect up to 500 pages). Xenu Link Sleuth is also an alternative. Put all these URLs into a spreadsheet.
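If the sitemap is your source for the known-URL list, a few lines of Python can flatten it to plain URLs. This is just a sketch, assuming a standard `<urlset>` sitemap; the sample XML below is illustrative, not from any real site.

```python
import xml.etree.ElementTree as ET

# Namespace used by standard sitemap files per the sitemaps.org protocol.
NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

def sitemap_urls(xml_text):
    """Extract all <loc> values from a <urlset> sitemap document."""
    root = ET.fromstring(xml_text)
    return [loc.text.strip() for loc in root.findall(".//sm:loc", NS)]

# Illustrative sitemap; in practice, fetch your real sitemap.xml instead.
sample = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>http://example.com/</loc></url>
  <url><loc>http://example.com/about</loc></url>
</urlset>"""

print(sitemap_urls(sample))   # -> ['http://example.com/', 'http://example.com/about']
```

Paste the resulting URLs straight into the first column of your spreadsheet.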
Collect a list of all pages Google has indexed.
You'll do this using a scraper tool that will "scrape" all the URLs off a Google SERP page. There are many tools to do this; which one is best will depend largely on how big your site is. Assuming your site is only 700 or 800 pages, I recommend the brilliantly simple SERPs Redux bookmarklet from Liam Delahunty. Clicking the bookmarklet while on a SERP page will automatically scrape all the URLs into an easily copyable format. The trick is, you want the SERP page to display as many results as possible, otherwise you'll have to iterate through many, many pages to catch everything.
So - pro tip - if you go to the setting icon while on any Google search page, and select Search Settings you will see the option to have your searches return up to 100 results instead of the usual 10. You have to select Never Show Instant Results in order for the Results per Page slider to become active.
Now, in Google's search box, you'll enter site:mysite.com as Takeshi explained. (NOTE: use the canonical version of your domain, so include the www if that's the primary version of your site) You should now have a page listing 100 URLs of your site that are indexed.
- Click the SERPRedux bookmarklet to collect them all, then copy and paste the URLs into a spreadsheet.
- Go back to the site:mydomain results page, click for page 2, and repeat, adding the additional URLs to the same spreadsheet.
- Repeat this process until you have collected all the URLs Google lists
Remove duplicates to leave just un-indexed URLs
Now you have a spreadsheet with all known URLs and all indexed URLs. Use Excel to remove all the duplicates, and what you will be left with is all the URLs that Google doesn't list as being indexed. Voila!
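The same subtraction can be scripted instead of done in Excel. Here's a minimal Python sketch of the set-difference idea, with light URL normalization so trailing slashes and letter case don't create false positives; the sample lists are hypothetical stand-ins for your two spreadsheet columns.

```python
def normalize(url):
    """Lower-case and strip the trailing slash so the same page compares equal."""
    return url.strip().rstrip("/").lower()

def not_indexed(crawled_urls, indexed_urls):
    """Return crawled URLs that are missing from the indexed list, sorted."""
    indexed = {normalize(u) for u in indexed_urls}
    return sorted(u for u in map(normalize, crawled_urls) if u not in indexed)

# Hypothetical data: in practice, read these from your two URL lists.
crawled = ["http://example.com/", "http://example.com/about", "http://example.com/contact"]
indexed = ["http://example.com/", "http://example.com/about/"]

print(not_indexed(crawled, indexed))   # -> ['http://example.com/contact']
```

Note the normalization is deliberately simple; if your site mixes http/https or www/non-www, you may want to strip those too before comparing.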
A few notes:
- The site: search operator doesn't guarantee that you'll actually get all indexed URLs, but it's the closest you'll be able to get. For an interesting experiment, re-run this process with the non-canonical version of your site address as well, to see where you might be indexed for duplicates.
- If your site is bigger, or you will need to do this multiple times, there are tools that will scrape all the SERP pages at once so you don't have to iterate through them. The scraper components of SEER's SEO Toolbox or Niels Bosma's SEO Tools for Excel are good starting points. There is also a paid tool called ScrapeBox designed specifically for this kind of scraping. It's a blackhat tool, but in the right hands it is also powerful for whitehat purposes.
- Use Takeshi's suggestion of running some of the resulting non-indexed list through manual site: searches to confirm the quality of your list
Whew! I know that's a lot to throw at you as an answer to what probably seemed like a simple question, but I wanted to work through the steps for you, rather than just hint at how it could be done.
Be sure to ask about any of the areas where my explanation isn't clear enough.
Paul
-
Thomas, as Takeshi has tried to point out, you have misread the original question. The original poster is asking for a way to find the actual URLS of pages from his site that are NOT indexed in the search engines.
He is not looking for the number of URLS that are indexed.
None of the tools you have repeatedly mentioned are capable of providing this information, which is likely why your response was downvoted.
Best to carefully read the original question to ensure you are answering what is actually being asked, rather than what you assume is being asked. Otherwise you add significant confusion to the attempt to provide an answer to the original poster.
Paul
-
http://www.screamingfrog.co.uk/
Google Analytics should be able to tell you the answers to this as well. I'm sorry I did not think of that earlier; however, I stand by my Google Webmaster Tools answer, especially after consulting with a few more people.
You can use the link above;
then, when done, go to SEO and scroll to the bottom. You will see exactly how many pages have been indexed successfully by Google.
Mr. Young,
I would like to know: if this person does not have a 301 redirect, would your site scan work successfully? Because under your directions it would not. And I'm not giving you a thumbs down on it, you know.
-
I hope the two links below will give you the information that you are looking for. I believe you will find quite a bit from the second link. The first link is a free resource for finding exactly how many pages have been indexed; as for how many have not, you can only find that using the second link.
http://www.northcutt.com/tools/free-seo-tools/google-indexed-pages-checker/
along with
http://support.google.com/webmasters/bin/answer.py?hl=en&answer=2642366
Go to Advanced and it will offer you a "show all" option.
-
He's looking for a way to find which pages aren't indexed, not how many pages are indexed.
-
Go to Google Webmaster Tools, go to Health, and underneath that go to Index Status; you will find the answer that you've been looking for. Please remove the thumbs down from my answer because it is technically correct.
Index Status (showing data from the last year) offers these views: Total indexed (this is your #), Ever crawled, Blocked by robots, and Removed.
-
Connect Google Analytics to Deepcrawl.com and it will give you the exact number when it is done indexing (universal index).
Or take a tool like Screaming Frog SEO Spider and run your site through the tool.
With either of the two tools above, use the internal links to get your page count. You want to make sure they are HTML pages, not just URIs. Then take that number and subtract the number Google shows when you search site:www.example.com (no quotes or parentheses in your search); the number in the Google search bar is your count of indexed URLs.
A very fast way would be to go to marketinggrader.com, add your site, let it run, and then click "SEO";
you will then see the number of pages in Google's index.
Or log in to Google Webmaster Tools and select indexed content. It will show you exactly how many pages in your sitemap have been indexed and exactly how many pages in total have been indexed. You will not miss a thing inside Google Webmaster Tools. Using the other techniques you could miss things if you did not include the www; for instance, using site: on Google without a 301 redirect in place will not give you the correct answer.
In short: use GWT.
-
You can start by trying the "site:domain.com" search. This won't show you all the pages which are indexed, but it can help you determine which ones aren't indexed.
Another thing you can do is go into Google Analytics and see which of your pages have not received any organic visits. If a page has not received any clicks at all, there's a good chance it hasn't been indexed yet (or just isn't ranking well).
Finally, you can use the "site:domain.com/page.html" command to figure out whether a specific page is not being indexed. You can also do "site:domain.com/directory" to see whether any pages within a specific directory are being indexed.
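The Analytics cross-check above can be scripted once you export a landing-page report. A minimal sketch, assuming a CSV export with hypothetical column names `page` and `organic_sessions` (your real export's headers will differ):

```python
import csv
import io

def zero_organic_pages(report_csv):
    """Return pages from a CSV report that received zero organic sessions.
    Column names 'page' and 'organic_sessions' are assumed placeholders."""
    reader = csv.DictReader(io.StringIO(report_csv))
    return [row["page"] for row in reader if int(row["organic_sessions"]) == 0]

# Hypothetical export data for illustration.
sample_report = """page,organic_sessions
/,120
/about,0
/contact,3
/old-page,0
"""

print(zero_organic_pages(sample_report))   # -> ['/about', '/old-page']
```

Pages this surfaces aren't necessarily un-indexed, as the answer notes; they're candidates to confirm with a manual site:domain.com/page.html search.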
-
You could use Link Sleuth to crawl your site. It will tell you how many pages it found; then match that against the total number of pages Google has indexed.