How can I make a list of all URLs indexed by Google?
-
I started working for this eCommerce site 2 months ago, and my SEO site audit revealed a massive spider trap.
The site should have been 3,500-ish pages, but Google has over 30K pages in its index. I'm trying to find an effective way of making a list of all the URLs indexed by Google.
Anyone?
(I basically want to build a sitemap containing all the indexed spider-trap URLs, set up 301 redirects on those, then ping Google with the "defective" sitemap so it can see what the site really looks like and remove those URLs, shrinking the index back to around 3,500 pages.)
-
If you can get a developer to create a list of all the pages Google has crawled within a date range, you can use this Python script to check whether each page is indexed:
http://searchengineland.com/check-urls-indexed-google-using-python-259773
The script uses the info: search operator to check the URLs.
You will have to install Python, Tor and Polipo for this to work. It is quite technical, so if you aren't a technical person you may need help.
Depending on how many URLs you have and how long you wait between checks, it can take a few hours.
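The linked script isn't reproduced here, but the general shape of the approach can be sketched as below. This is a hypothetical, minimal version, assuming a urls.txt-style list and a local Tor/Polipo proxy on port 8123 (Polipo's default; adjust to your setup). Querying Google programmatically is against its terms of service, which is exactly why the proxy and the long delays are needed, and note that Google has since retired the info: operator.

```python
# Minimal sketch of "check each URL's index status via an info: query".
# The proxy address and delay are assumptions; tune them to your setup.
import time
import urllib.parse
import urllib.request

PROXY = "http://127.0.0.1:8123"  # assumed local Polipo port in front of Tor

def build_query_url(page_url):
    """Build the Google query URL for the info: operator."""
    return "https://www.google.com/search?q=" + urllib.parse.quote("info:" + page_url)

def looks_indexed(serp_html, page_url):
    """Crude signal: Google echoes the URL back in the result page."""
    return page_url in serp_html

def check_urls(urls, delay=30):
    """Check a list of URLs one at a time, pausing between requests."""
    opener = urllib.request.build_opener(
        urllib.request.ProxyHandler({"http": PROXY, "https": PROXY}))
    opener.addheaders = [("User-Agent", "Mozilla/5.0")]
    results = {}
    for url in urls:
        with opener.open(build_query_url(url), timeout=30) as resp:
            html = resp.read().decode("utf-8", "replace")
        results[url] = looks_indexed(html, url)
        time.sleep(delay)  # be gentle; this is why a big list takes hours
    return results
```

The per-request delay is the main cost: at 30 seconds per URL, a few thousand URLs is indeed a job measured in hours.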
-
Thanks for your input, guys! I've almost landed on the following approach:
- Use http://www.chrisains.com/seo-tools/extract-urls-from-web-serps/ to collect a number (3-600) of URLs based on the various problem URL footprints.
- Make XML "problem sitemaps" based on the above URLs.
- Implement 301 redirects.
- Ping the search engines with the XML "problem sitemaps" so they can discover the changes and see what the site really looks like (ideally reducing the number of indexed pages by about 85%).
- Track SE traffic as well as the index count for each URL footprint once a week for 6-8 weeks and follow progress.
- If progress is not satisfactory, go the URL Profiler route.
Any thoughts before I go ahead?
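The sitemap and ping steps above can be sketched in Python. This is a minimal illustration under assumptions, not the exact tooling: the file names are made up, and the ping endpoint shown was how sitemap pinging worked at the time (Google has since retired it in favour of Search Console submission), so verify the current submission method before relying on it.

```python
# Sketch: render a minimal XML "problem sitemap" from a URL list and
# build the (historical) Google ping URL for it. Names are illustrative.
import urllib.parse
from xml.sax.saxutils import escape

def build_sitemap(urls):
    """Render a minimal urlset sitemap for the given URLs."""
    entries = "\n".join(f"  <url><loc>{escape(u)}</loc></url>" for u in urls)
    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
        f"{entries}\n"
        "</urlset>\n")

def ping_url(sitemap_url):
    """Historical ping endpoint for notifying Google of a sitemap."""
    return ("https://www.google.com/ping?sitemap="
            + urllib.parse.quote(sitemap_url, safe=""))
```

Note the `escape()` call: spider-trap URLs often contain `&` in query strings, which must become `&amp;` in valid sitemap XML or Google will reject the file.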
-
URL Profiler will do this, as will the other recommended scraper sites.
-
URL Profiler might be worth checking out. It does require that you use a proxy, since Google does not like you scraping its search results.
-
I'm sorry to confirm that Google does not want everyone to know what it has in its index. We as SEOs complain about that.
It's hard to believe that you couldn't get all your pages with a scraper (it simply runs the searches and collects the SERPs).
-
I tried this one and a few others: http://www.chrisains.com/seo-tools/extract-urls-from-web-serps/. It gave me about 500-1000 URLs at a time, but involved a lot of copying and pasting back and forth.
I imagine there must be a much easier way of doing this...
-
Well, there are some scrapers that might do the job.
To do it properly you will need proxies and a scraper.
My recommendation is GScraper or ScrapeBox and a list of at least 10 proxies. Then just run a scrape with "site:mydomain.com" and see what you get.
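For reference, what a tool like ScrapeBox does with a site: query can be sketched roughly as below. The URL parameters and the href regex are assumptions: Google's SERP markup changes often, site: results are capped well below the true index count, and in practice you need rotating proxies, which is why a dedicated tool is the sensible route.

```python
# Rough sketch of "paginate site: queries and harvest result URLs".
# Parsing raw SERP HTML with a regex is fragile; this only illustrates
# the idea, not a production scraper.
import re
import urllib.parse

def serp_page_url(domain, start):
    """Build a paginated Google query URL for site:domain."""
    q = urllib.parse.quote(f"site:{domain}")
    return f"https://www.google.com/search?q={q}&num=100&start={start}"

def extract_result_urls(serp_html, domain):
    """Pull hrefs pointing at the target domain out of raw SERP HTML."""
    hrefs = re.findall(r'href="(https?://[^"]+)"', serp_html)
    return [h for h in hrefs
            if urllib.parse.urlparse(h).netloc.endswith(domain)]
```

Stepping `start` by 100 per page and de-duplicating the harvested URLs gives you the indexed-URL list to feed into the "problem sitemap".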
(Before buying proxies or any scraper, check whether the free tools give you something like what you want.)
-
I used Screaming Frog to discover the spider trap (and more), but as far as I know I cannot use it to import all the URLs that Google actually has in its index (or can I?).
A list of the URLs actually in Google's index is what I'm after.
-
Hi Sverre,
Have you tried Screaming Frog SEO Spider? Here is a link to it: https://www.screamingfrog.co.uk/seo-spider/
It's really helpful for crawling all the pages you have accessible to spiders. You might need the paid version to crawl more than 500 pages.
Also, have you checked for the common duplicate-content issues? Here is a Moz tutorial: https://moz.com/learn/seo/duplicate-content
Hope it helps.
GR.