What sources can we use to compile a comprehensive list of pages indexed in Google?
-
As part of a Panda recovery initiative we are trying to compile as comprehensive a list as possible of the URLs currently indexed by Google.
Using the site:domain.com operator, Google reports that approximately 21k pages are indexed. Scraping the results, however, ends after only 240 links have been listed.
Are there any other sources we could use to make the list more comprehensive? To be clear, we are not looking for external crawlers like the SEOmoz crawl tool, but for sources that would confidently allow us to determine the list of URLs currently held in the Google index.
Thank you /Thomas
-
We don't usually take private info in public questions, but if you want to, Private Message me the domain (via my profile). I'm really curious about (1) and I'd love to take a peek.
-
Thanks Pete,
As always, I very much appreciate your input.
1/ We aren't using any parameters, and when using filter=0 we get the same results. In a test I just ran, I was only able to pull 350 pages out of 18.5k using the web interface. If anyone has any other thoughts on this, please let me know.
2/ That is a great idea. Most of our pages live in the root directory to keep the URL slugs short, so unfortunately this one will not help us.
3/ Another good idea. I understand this approach is helpful for seeing your coverage of wanted pages in the Google index, but unless I misunderstood you, it won't help determine which superfluous pages are currently in the index?
4/ We are using Screaming Frog and I agree it's a fantastic tool. The Screaming Frog crawl shows no more than 300 pages, which is our target index size.
Overall we are seeing continuous yet small drops in index size using our approach of returning 410 response codes for unwanted pages and dedicated sitemaps to speed up delisting. See http://www.seomoz.org/q/panda-recovery-what-is-the-best-way-to-shrink-your-index-and-make-google-aware
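For anyone reading along, here is a minimal sketch of how the 410 part might look in a Python/Flask app; the route and slug list are hypothetical placeholders, not our real setup:

from flask import Flask, abort

app = Flask(__name__)

# Hypothetical slugs we want permanently dropped from Google's index.
GONE = {"old-tag-page", "thin-listing"}

@app.route("/<slug>")
def page(slug):
    if slug in GONE:
        abort(410)  # 410 Gone: a "removed on purpose" signal, stronger than 404
    return "content for " + slug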
We are just trying to get a more complete list of what's currently in the index to speed up delisting.
Thank you for the reference to the Panda post; I remember reading it before and will give it another go right now.
One final question: in your experience dealing with Panda penalties, have you seen scenarios where the delisting/penalizing of a site seems to have happened only for a particular Google ccTLD, or only for the homepage? See http://www.seomoz.org/q/panda-penguin-penalty-not-global-but-only-firea-for-specific-google-cctlds This is what we are currently experiencing, and we are trying to see if other people have observed something similar.
Best /Thomas
-
If you're willing to piece together multiple sources, I can definitely give you some starting points:
(1) First, dropping from 21K pages indexed in Google to 240 definitely seems odd. Are you hitting omitted results? You may have to shut off filtering in the URL (&filter=0).
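For example, a site: query with filtering disabled might look like this (example.com as a placeholder):
https://www.google.com/search?q=site%3Aexample.com&filter=0&num=100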
(2) You can also divide the site up logically and run "site:" on sub-folders, parameters, etc. Say, for example:
site:example.com/blog
site:example.com/shop
site:example.com/uk
As long as there's some logical structure, you can use it to break the index request down into smaller chunks. Don't forget to use inurl: for URL parameters (filters, pagination, etc.). A quick script for generating these queries in bulk is sketched below.
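Here's a minimal Python sketch for generating those queries; the section and parameter names are placeholders for your own site structure:

# Placeholder site sections and URL parameters; swap in your own.
sections = ["blog", "shop", "uk"]
params = ["sort", "page"]

for section in sections:
    print("site:example.com/" + section)

for param in params:
    print("site:example.com inurl:" + param + "=")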
(3) This takes a while, but split up your XML sitemaps into logical clusters - say, one for major pages, one for top-level topics/categories, one for sub-categories, one for products. That way, you'll get a cleaner count of what kinds of pages are indexed, and you'll know where your gaps are. A rough sketch of automating the split follows.
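This Python sketch splits a flat urls.txt list into per-cluster sitemap files, clustering by the first path segment; both the input file and the clustering rule are assumptions to adapt to your own structure:

from collections import defaultdict
from xml.sax.saxutils import escape

clusters = defaultdict(list)
with open("urls.txt") as f:  # assumed: one URL per line
    for line in f:
        url = line.strip()
        if not url:
            continue
        # Crude clustering by first path segment ("blog", "shop", ...).
        path = url.split("//", 1)[-1].split("/", 1)
        key = path[1].split("/", 1)[0] if len(path) > 1 and path[1] else "root"
        clusters[key].append(url)

for key, urls in clusters.items():
    with open("sitemap-" + key + ".xml", "w") as out:
        out.write('<?xml version="1.0" encoding="UTF-8"?>\n')
        out.write('<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n')
        for url in urls:
            out.write("  <url><loc>" + escape(url) + "</loc></url>\n")
        out.write("</urlset>\n")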
(4) Run a desktop crawler on the site, like Xenu or Screaming Frog (Xenu is free, but PC only and harder to use. Screaming Frog has a yearly fee, but it's an excellent tool). This won't necessarily tell you what Google has indexed, but it will help you see how your site is being crawled and where problems are occurring.
I wrote a mega-post a while back on all the different kinds of duplicate content. Sometimes, just seeing examples can help you catch a problem you might be having. It's at:
http://www.seomoz.org/blog/duplicate-content-in-a-post-panda-world
-
Does anyone have any insight on this? If the answer is simply that there is no better approach than looking at the limited data available through the Google UI, that would be helpful as well.