    The Moz Q&A Forum


    Moz Q&A is closed.

    After more than 13 years and tens of thousands of questions, Moz Q&A closed on 12th December 2024. While we're not completely removing the content - many posts will still be viewable - we have locked both new posts and new replies. More details here.

    How can I make a list of all URLs indexed by Google?

    Intermediate & Advanced SEO
    • Bryggselv.no

      I started working for this eCommerce site 2 months ago, and my SEO site audit revealed a massive spider trap.

      The site should have been 3500-ish pages, but Google has over 30K pages in its index. I'm trying to find an effective way of making a list of all URLs indexed by Google.

      Anyone?

      (I basically want to build a sitemap with all the indexed spider trap URLs, then set up 301 on those, then ping Google with the "defective" sitemap so they can see what the site really looks like and remove those URLs, shrinking the site back to around 3500 pages)

      • Worship_Digital

        If you can get a developer to create a list of all the pages Google has crawled within a date range, then you can use this Python script to check whether each page is indexed or not.

        http://searchengineland.com/check-urls-indexed-google-using-python-259773

        The script uses the info: search operator to check the URLs.

        You will have to install Python, Tor and Polipo for this to work. It is quite technical so if you aren't a technical person you may need help.

        Depending on how many URLs you have and how long you decide to wait between checks, it can take a few hours.
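        For reference, the info: operator that script relies on has since been retired by Google, so a per-URL site: query is the closest substitute today. A minimal sketch of the query-building side (the fetch_through_proxy helper is hypothetical - actually fetching results without getting blocked is exactly what the Tor/Polipo setup in the linked script is for):

```python
import urllib.parse

def build_site_query(url):
    """Build a Google search URL that checks a single URL's indexation.

    Uses a per-URL `site:` query as a stand-in for the retired
    `info:` operator that the linked script originally used.
    """
    query = "site:" + url
    return "https://www.google.com/search?q=" + urllib.parse.quote(query, safe="")

# Checking the result page would look roughly like this -
# fetch_through_proxy is a hypothetical helper, not a real library call:
#
#   html = fetch_through_proxy(build_site_query(u))
#   indexed = "did not match any documents" not in html
```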

        • Bryggselv.no

          Thanks for your input guys! I've almost landed on the following approach:

          1. Use this http://www.chrisains.com/seo-tools/extract-urls-from-web-serps/ to collect a number (3-600) of URLs based on the various problem URL-footprints.
          2. Make XML "problem sitemaps" based on above URLs
          3. Implement 301s
          4. Ping the search engines with the XML "problem sitemaps", so that these may discover changes and see what the site really looks like (ideally reducing the # of indexed pages by about 85%)
          5. Track SE traffic as well as index for each URL footprint once a week for 6-8 weeks and follow progress
          6. If progress is not satisfactory, then go the URL Profiler route.

          Any thoughts before I go ahead?
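Step 2 of the plan above (building the XML "problem sitemaps" from the collected URLs) can be sketched with the standard library alone - build_problem_sitemap is an illustrative name, not an existing tool:

```python
import xml.etree.ElementTree as ET

def build_problem_sitemap(urls):
    """Serialize a list of spider-trap URLs into sitemap XML (step 2)."""
    ns = "http://www.sitemaps.org/schemas/sitemap/0.9"
    urlset = ET.Element("urlset", xmlns=ns)
    for u in urls:
        # Each URL gets its own <url><loc>...</loc></url> entry;
        # ElementTree escapes characters like & automatically.
        loc = ET.SubElement(ET.SubElement(urlset, "url"), "loc")
        loc.text = u
    return ET.tostring(urlset, encoding="unicode")
```

The resulting file can then be submitted via Search Console for step 4, so the engines recrawl the 301'd URLs sooner.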

          • TammyWood

            URL Profiler will do this, as will the other recommended scraper tools.

            • vcj

              URL Profiler might be worth checking out:

              http://urlprofiler.com/

              It does require that you use a proxy, since Google does not like you scraping their search results.

              • Gaston Riera @Bryggselv.no

                I'm sorry to confirm that Google doesn't want everyone to know everything it has in its index. We as SEOs complain about that.

                It's hard to believe that you couldn't get all your pages with a scraper (it just runs the searches and collects the SERPs).

                • Bryggselv.no @GastonRiera

                  I tried this and a few others: http://www.chrisains.com/seo-tools/extract-urls-from-web-serps/. This gave me about 500-1000 URLs at a time, but involved a lot of cutting and pasting back and forth.

                  I imagine there must be a much easier way of doing this...

                  • Gaston Riera @Bryggselv.no

                    Well, there are some scrapers that might do the job.

                    To do it the right way you will need proxies and a scraper.
                    My recommendation is GScraper or Scrapebox and a list of (at least) 10 proxies.

                    Then just run a scrape with "site:mydomain.com" and see what you get.
                    (Before buying proxies or any scraper, check whether the free tools get you something close to what you want.)
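The site: scrape described above boils down to walking Google's paginated results. A sketch of just the query-building side, with illustrative names (note Google caps any single query at roughly 1,000 results, which is why a 30K-page index also needs the query split by URL footprint, e.g. site:mydomain.com inurl:filter):

```python
from urllib.parse import quote_plus

def site_query_pages(domain, pages=10, per_page=100):
    """Generate the paginated Google result URLs a scraper would walk
    for a `site:` query, stepping the `start` offset page by page."""
    q = quote_plus("site:" + domain)
    return [
        f"https://www.google.com/search?q={q}&num={per_page}&start={i * per_page}"
        for i in range(pages)
    ]
```

Fetching these URLs directly gets rate-limited quickly - hence the proxy list that GScraper or Scrapebox rotates through.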

                    • Bryggselv.no

                      I used Screaming Frog to discover the spider trap (and more), but as far as I know, I cannot use Screaming Frog to import all URLs that Google actually has in its index (or can I?).

                      A list of URLs actually in Google's index is what I'm after 🙂

                      • Gaston Riera

                        Hi Sverre,

                        Have you tried Screaming Frog SEO Spider? Here's a link: https://www.screamingfrog.co.uk/seo-spider/

                        It's really helpful for crawling all the pages you have accessible to spiders. You might need the paid version to crawl more than 500 pages.

                        Also, have you checked for the common duplicate-content issues? Here's a Moz tutorial: https://moz.com/learn/seo/duplicate-content

                        Hope it helps.
                        GR.
