How to determine which pages are not indexed
-
Is there a way to determine which pages of a website are not being indexed by the search engines?
I know Google Webmaster Tools has a sitemap area where it tells you how many URLs have been submitted and how many of those are indexed. However, it doesn't necessarily show which URLs aren't being indexed.
-
When discussing the Google index, I recommend https://sitecheck.tools/en/check-page-indexed/. This service is completely free and can handle anything from 100 to 100 million pages. It's an efficient way to determine which of your pages are indexed by Google. Whether you're managing a small site or a large portal, this tool offers a practical solution for monitoring your site's indexing status.
-
The better way is to check in the Search Console. For example, Bing Webmaster Tools and Google Search Console have special tabs where you can see which pages are indexed and which pages are not.
There are also a few services that can make this more UX-friendly, for example my service https://sitecheck.tools/. If you need help, please let me know.
-
@mfrgolfgti Lol, yes that does work, but not for indexing?
-
Hi, I know this is an old question but I wanted to ask about the first paragraph of your answer: "You can start by trying the "site:domain.com" search. This won't show you all the pages which are indexed, but it can help you determine which ones aren't indexed."
Do you happen to know why doing a site:domain.com search doesn't show all the indexed pages? I've just discovered this for our website. Doing the site: command shows 73 pages, but checking through the list, there are lots of pages not included. However, if I do the site:domain.com/page.html command for those individual pages, they do come up in the search results page. I don't understand why, though?
-
I'm running into this same issue, where about a quarter of a client's site is not indexing. Using the site:domain.com trick shows me 336 results - which I somehow need to add to a CSV file, compare against the URLs crawled by Screaming Frog, and then use VLOOKUP to find the unique values.
So how can I get those 300+ results exported to a CSV file for analysis?
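For context, the VLOOKUP step I have in mind would look something like this (just a sketch, with made-up sheet names: the Screaming Frog export pasted into a sheet called Crawl, the scraped site: results into a sheet called Serps, and the formula filled down alongside Crawl's URL column):
=IF(ISNA(VLOOKUP(A2, Serps!A:A, 1, FALSE)), "not indexed", "indexed")
Filtering that column for "not indexed" would leave the unique values I'm after - I just need the 336 results in a spreadsheet first.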
-
Deepcrawl will provide the information with one tool. It's not inexpensive, but it's definitely the best tool out there. You have to connect it to Google Analytics in order for it to give you this information, but it will show you how many of your URLs are indexed and how many are not (and should be).
Connect it to Google Webmaster Tools and Google Analytics, and then use any of the many ways of scraping or indexing the site.
Technically that is more than one tool, but it is a good way.
All the best,
tom
-
Crawl the domain using Screaming Frog, then use URL Profiler to check the URLs' indexation status.
You'll need proxies.
It can be done with ScrapeBox too.
Otherwise, you can probably use Sheets with some IMPORTXML wizardry to create a query on Google, as sketched below.
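Something along these lines, purely as a sketch - the XPath is a guess at Google's markup, and Google frequently blocks automated requests from Sheets, so don't count on it being stable:
=IMPORTXML("https://www.google.com/search?q=site:example.com&num=100", "//cite")
If it does return results, you get the displayed URLs of the first 100 results in a column you can compare against your crawl.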
-
hi Paul,
I too have not had any luck with Screaming Frog actually checking every link that it claims it will. You're exactly right: it will check the homepage or the single link that you choose, but in my experience it will not check everything. I have a friend who has the paid version; I will ask him.
I'll be sure to let you know, because I do agree with you - I just found this out myself. It is, in fact, misleading to say it checks all when it really checks just one.
Excellent tutorial, by the way, on how to do this seemingly easy task, which when attempted is truly not easy at all.
Sincerely,
Thomas
PS: I get this result with site:www.example.com
It gives me the opportunity to see all the indexed pages Google has processed; however, I would have to compare them to a CSV file in order to actually know what is missing.
I really like your example and definitely will use that in the future.
-
Thanks for the reminder that Screaming Frog has that "Check Index" functionality, Thomas.
Unfortunately, I've never been able to get that method to check more than one link at a time, as all it does is send the request to a browser to check. Even highlighting multiple URLs and checking for indexation only checks the first one. Great for spot checks, but not what Seth is looking for, I don't think. My other post details an automatic way to check a site's hundreds (or thousands) of pages at a time.
I only have the free version of Screaming Frog on this machine at the moment so would be very interested to know if the paid version changes this.
Paul
-
Dear Paul,
thank you for taking the time to address this.
I was extremely hasty when I wrote my first answer: I copied and pasted it off of dictation software that I use, and then went on to wrongly say it was the correct way to do something. However, Screaming Frog SEO Spider, the tool I referenced early on, allows you to see 100% of all the links you are hosting at the time you run the scan.
It also includes the ability to check whether a page is indexed with Google, Bing, and Yahoo. When I referenced this software, nobody took notice, as I probably looked like I did not know what I was talking about.
In hindsight I should have kept bringing up Screaming Frog; however, I did not, and simply brought up other ways to check lost links. In my opinion, going into Google and checking one by one what you do or do not know is indexed is a very long and arduous task.
Screaming Frog allows you to click Internal links, then right-click and choose to check if a URL is indexed; a menu comes down on the right side, and you can select from the three big search engines. You can do many more things with this fantastic tool, but I did not illustrate, as well as I am doing right now, exactly how it should be used or what its capabilities are. I truly thought that once I had referenced it, somebody would look into it and see what I was speaking about; however, hindsight is 20/20. I appreciate your comment very much and hope you can see that yes, I was mistaken at the beginning; however, I did come up with an automated way to answer the question asked.
Screaming Frog can be used on PC, Mac, or Linux. It is free to download and comes in a paid version with even more abilities than what are showcased in the free edition. It is only 2 MB in size and uses almost no RAM on a Mac; I don't know how big it is on the PC.
here's the link to the software
http://www.screamingfrog.co.uk/seo-spider/
I hope that you will accept my apologies for not paying as much attention as I should have to what I pasted, and I hope this tool will be of use to you.
Respectfully,
Thomas
-
There is no individual tool capable of providing the info you're looking for, Seth. At least as far as I've ever come across.
HOWEVER! It is possible to do it if you are willing to do some of the work on your own to collect and manipulate data using several tools. Essentially this method automates the approach Takeshi has mentioned.
The short answer
First you'll create a list of all the pages on your website. Then you'll create a list of all the URLs that Google says are indexed. From there, you will use Excel to subtract the indexed URLs from the known URLs, leaving a list of non-indexed URLs, which is what you asked for. Ready? Here's how.
Collect a list of all your site's pages
You can do this in several ways. If you have a reliable and complete sitemap, you can get this data there (see the formula below). If your CMS is capable of outputting such a list, great. If neither of these is an option, you can use the Screaming Frog spider to get the data (remember, the free version will only collect up to 500 pages). Xenu's Link Sleuth is also an alternative. Put all these URLs into a spreadsheet.
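If you do go the sitemap route, a quick way to pull the URLs into Google Sheets is IMPORTXML (a sketch - substitute your own sitemap URL; the local-name() bit just sidesteps the sitemap's XML namespace):
=IMPORTXML("https://example.com/sitemap.xml", "//*[local-name()='loc']")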
Collect a list of all pages Google has indexed.
You'll do this using a scraper tool that will "scrape" all the URLs off a Google SERP page. There are many tools to do this; which one is best will depend largely on how big your site is. Assuming your site is only 700 or 800 pages, I recommend the brilliantly simple SERPs Redux bookmarklet from Liam Delahunty. Clicking on the bookmarklet while on a SERP page will automatically scrape all the URLs into an easily copyable format. The trick is, you want the SERP page to display as many results as possible; otherwise you'll have to iterate through many, many pages to catch everything.
So - pro tip - if you go to the settings icon while on any Google search page and select Search Settings, you will see the option to have your searches return up to 100 results instead of the usual 10. You have to select "Never show Instant results" in order for the Results per Page slider to become active.
Now, in Google's search box, you'll enter site:mysite.com as Takeshi explained. (NOTE: use the canonical version of your domain, so include the www if that's the primary version of your site) You should now have a page listing 100 URLs of your site that are indexed.
- Click the SERPRedux bookmarklet to collect them all, then copy and paste the URLs into a spreadsheet.
- Go back to the site:mydomain results page, click for page 2, and repeat, adding the additional URLs to the same spreadsheet.
- Repeat this process until you have collected all the URLs Google lists
Remove duplicates to leave just un-indexed URLs
Now you have a spreadsheet with all known URLs and all indexed URLs. Use Excel to remove all the duplicates (one way is sketched below), and what you will be left with is all the URLs that Google doesn't list as being indexed. Voilà!
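One way to do that subtraction, as a sketch (the layout is hypothetical: known URLs in column A of your main sheet, indexed URLs in column A of a sheet named Indexed, formula filled down column B):
=IF(COUNTIF(Indexed!A:A, A2)=0, "NOT INDEXED", "")
Filter column B for "NOT INDEXED" and that's your list. One caution: Excel's built-in Remove Duplicates keeps one copy of each value rather than dropping both copies, so the COUNTIF flag is the safer way to isolate URLs that appear in only one list.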
A few notes:
- The site: search operator doesn't guarantee that you'll actually get all indexed URLs, but it's the closest you'll be able to get. For an interesting experiment, re-run this process with the non-canonical version of your site address as well, to see where you might be indexed for duplicates.
- If your site is bigger, or you will need to do this multiple times, there are tools that will scrape all the SERP pages at once so you don't have to iterate through them. The scraper components of SEER's SEO Toolbox or Niels Bosma's SEO Tools for Excel are good starting points. There is also a paid tool called ScrapeBox designed specifically for this kind of scraping. It's a blackhat tool, but in the right hands it is also powerful for whitehat purposes.
- Use Takeshi's suggestion of running some of the resulting non-indexed list through manual site: searches to confirm the quality of your list
Whew! I know that's a lot to throw at you as an answer to what probably seemed like a simple question, but I wanted to work through the steps for you, rather than just hint at how it could be done.
Be sure to ask about any of the areas where my explanation isn't clear enough.
Paul
-
Thomas, as Takeshi has tried to point out, you have misread the original question. The original poster is asking for a way to find the actual URLS of pages from his site that are NOT indexed in the search engines.
He is not looking for the number of URLS that are indexed.
None of the tools you have repeatedly mentioned are capable of providing this information, which is likely why your response was downvoted.
Best to carefully read the original question to ensure you are answering what is actually being asked, rather than what you assume is being asked. Otherwise you add significant confusion to the attempt to provide an answer to the original poster.
Paul
-
http://www.screamingfrog.co.uk/
Google Analytics should be able to tell you the answer to this as well. I'm sorry I did not think of that earlier; however, I stand by my Google Webmaster Tools answer, especially after consulting with a few more people.
You can use the tool linked above; then, when the crawl is done, go to the SEO section and scroll to the bottom. You will see exactly how many pages have been indexed successfully by Google.
Mr. Young,
I would like to know: if this person does not have a 301 redirect, would your site scan work successfully? Because under your directions it would not. And I'm not giving you a thumbs down on it, you know.
-
I hope the two links below will give you the information that you are looking for. I believe that you will find quite a bit from the second link. The first link will give you a free resource for finding exactly how many pages have been indexed; as far as which have not, you can only find that using the second link.
http://www.northcutt.com/tools/free-seo-tools/google-indexed-pages-checker/
along with
http://support.google.com/webmasters/bin/answer.py?hl=en&answer=2642366
Go to Advanced and it will offer you a "show all" option.
-
He's looking for a way to find which pages aren't indexed, not how many pages are indexed.
-
Go to Google Webmaster Tools, go to Health, and underneath that go to Index Status; you will find the answer that you've been looking for. Please remove the thumbs down from my answer, because it is technically correct.
The Index Status report (showing data from the last year) lists: Total indexed (this is your #), Ever crawled, Blocked by robots, and Removed.
-
Connect Google Analytics to Deepcrawl.com and it will give you the exact number when it is done indexing (in the universal index).
Or take a tool like Screaming Frog SEO Spider and run your site through it.
With either of the two tools above, use the internal links to get your page count; you want to make sure they are HTML pages, not just URIs. Then take that number and subtract the count Google shows when you enter site:www.example.com in the Google search bar (no quotes or parentheses in your search); the number shown there is your count of indexed URLs.
A very fast way would be to go to marketinggrader.com, add your site, and let it run; then click "SEO" and you will see the number of pages in Google's index.
Or log in to Google Webmaster Tools and select indexed content; it will show you exactly how many pages in your sitemap have been indexed and exactly how many pages in total have been indexed. You will not miss a thing inside Google Webmaster Tools. With the other techniques you could miss things if you did not include the www; for instance, using site: on Google when you do not have a 301 redirect will not give you the correct answer.
Use GWT.
-
You can start by trying the "site:domain.com" search. This won't show you all the pages which are indexed, but it can help you determine which ones aren't indexed.
Another thing you can do is go into Google Analytics and see which of your pages have not received any organic visits. If a page has not received any clicks at all, there's a good chance it hasn't been indexed yet (or just isn't ranking well).
Finally, you can use the "site:domain.com/page.html" command to figure out whether a specific page is not being indexed. You can also do "site:domain.com/directory" to see whether any pages within a specific directory are being indexed.
-
You could use Xenu's Link Sleuth to crawl your site. It will tell you how many pages it found; then match that against the total number of pages Google has indexed.