Google Search Results...
-
I'm trying to download all of the Google search results for my company (site:company.com). The most I can get is 100 results; I tried using SEOquake, but it also caps at 100.
The reason for this? I would like to see which pages are indexed. The www pages and subdomain pages should only account for about 7,000, but the search results show 23,000. I would like to see what the others are in the 23,000.
Any advice on how to go about this? I can check subdomains individually (site:www.company.com, site:static.company.com), but I don't know all of the subdomains.
Has anyone cracked this? I tried using a scraper tool, but it was only able to retrieve 200 results.
-
I see. If you have some idea of which sections of your site might be in there that you don't want, you can use site:company.com inurl:whatever to narrow it down. You should know the file path or query string used for your search and shop pages, and you can put that string after the inurl: operator.
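For example, if your search and checkout pages carry recognisable strings in their URLs (these patterns are hypothetical; substitute whatever your platform actually uses):

    site:company.com inurl:search
    site:company.com inurl:checkout

You can also subtract the subdomains you already know about to surface the ones you don't, e.g. site:company.com -site:www.company.com -site:static.company.com.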
-
The goal is to identify which pages Google is indexing and whether there are any it shouldn't be. (We deliberately keep search, basket, and checkout pages out of the index.)
I don't know all of the subdomains, and searching the ones I do know individually doesn't add up to the total count I get for site:company.com.
My Moz reports don't show duplicate pages, so it can't be that. If I could download the full Google search results into a spreadsheet, I could quickly filter and see which pages are being indexed that shouldn't be.
-
OK, but what's your goal with this? And why don't you know the subdomains you've created yourself? It seems like you could work backwards from a better starting point with that information.
-
My GA is only set up for a single domain, as the subdomains hold just PDFs, images, etc. Traffic reports from GA are focused on www.company.com pages.
The only way I can know exactly which URLs have been indexed seems to be paging through the Google search results, but they cap out after seven pages.
-
Hi Cyto. Why don't you try exporting the pages receiving google/organic visits from Google Analytics, using Landing Page as a secondary dimension? It won't be all-inclusive, but it will give you a good idea of which pages are indexed and drawing in visitors. You can then compare that data against your sitemaps.
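Once you have the GA export and your sitemap, a short script can surface the gap between them. A minimal sketch in Python, assuming a hypothetical CSV export with a 'Landing Page' column and a sitemap at the conventional /sitemap.xml location (the file name and URLs are placeholders):

    import csv
    import urllib.request
    import xml.etree.ElementTree as ET

    SITEMAP_URL = "https://www.company.com/sitemap.xml"  # placeholder location
    GA_EXPORT = "ga_landing_pages.csv"                   # placeholder GA export file

    # Gather every URL the sitemap says should be indexed.
    ns = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
    with urllib.request.urlopen(SITEMAP_URL) as resp:
        tree = ET.parse(resp)
    sitemap_urls = {loc.text.strip() for loc in tree.iterfind(".//sm:loc", ns)}

    # Gather landing pages that actually received google/organic visits.
    with open(GA_EXPORT, newline="") as f:
        landing_urls = {"https://www.company.com" + row["Landing Page"]
                        for row in csv.DictReader(f)}

    # Pages drawing organic visits that the sitemap doesn't know about are
    # good candidates for the mystery URLs inflating the site: count.
    for url in sorted(landing_urls - sitemap_urls):
        print(url)

Anything it prints is receiving organic traffic without being in your sitemap, which makes a useful shortlist of pages that are indexed but perhaps shouldn't be.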
Related Questions
-
How do internal search results get indexed by Google?
Hi all,

Most of the URLs that are created by using the internal search function of a website or web shop shouldn't be indexed, since they create duplicate content or waste crawl budget. The standard way to go is to 'noindex, follow' these pages, or sometimes to use robots.txt to disallow crawling of them.

The first question I have is how these pages would actually get indexed in the first place if you didn't use one of the options above. Crawlers follow links to index a website's pages. If a random visitor comes to your site and uses the search function, this creates a URL. There are no links leading to this URL, it is not in a sitemap, and it can't be found by navigating the website... so how can search engines index these URLs that were generated by an internal search function?

Second question: let's say somebody embeds a link on his website pointing to a URL from your website that was created by an internal search. Now let's assume you used robots.txt to make sure these URLs weren't indexed. This means Google won't even crawl those pages. Is it possible then that the link that was used on the other website will show an empty page after a while, since Google doesn't even crawl this page?

Thanks for your thoughts, guys.
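As an aside, if you go the robots.txt route, Python's built-in parser is a quick way to verify that a disallow rule actually covers these generated URLs. A minimal sketch, using a hypothetical shop domain and a hypothetical /search?q= URL pattern:

    from urllib import robotparser

    # Load the live robots.txt (the domain is a placeholder).
    rp = robotparser.RobotFileParser()
    rp.set_url("https://www.example-shop.com/robots.txt")
    rp.read()

    # If robots.txt contains "Disallow: /search", this prints False,
    # meaning a well-behaved crawler won't fetch the page at all.
    print(rp.can_fetch("Googlebot", "https://www.example-shop.com/search?q=blue+shoes"))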
Intermediate & Advanced SEO | Mat_C
-
What is Google supposed to return when you submit an image URL into Fetch as Google? Is a few lines of readable text followed by lots of unreadable text normal?
I am seeing something like this (is this normal?):
HTTP/1.1 200 OK
Server: nginx
Content-Type: image/jpeg
X-Content-Type-Options: nosniff
Last-Modified: Fri, 13 Nov 2015 15:23:04 GMT
Cache-Control: max-age=1209600
Expires: Fri, 27 Nov 2015 15:23:55 GMT
X-Request-ID: v-8dd8519e-8a1a-11e5-a595-12313d18b975
X-AH-Environment: prod
Content-Length: 25505
Accept-Ranges: bytes
Date: Fri, 13 Nov 2015 15:24:11 GMT
X-Varnish: 863978362 863966195
Age: 16
Via: 1.1 varnish
Connection: keep-alive
X-Cache: HIT
X-Cache-Hits: 1
[... followed by the raw binary JPEG data, which renders as a long run of unreadable characters ...]
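For what it's worth, you can reproduce the readable part yourself with a HEAD request, which returns the headers without the binary body. A minimal sketch in Python (the image URL is a placeholder):

    import urllib.request

    # HEAD asks the server for headers only, i.e. the readable part of
    # what Fetch as Google shows before the image bytes begin.
    req = urllib.request.Request("https://www.example.com/images/photo.jpg",
                                 method="HEAD")
    with urllib.request.urlopen(req) as resp:
        for name, value in resp.getheaders():
            print(f"{name}: {value}")

A 200 status with Content-Type: image/jpeg followed by the raw image bytes is exactly what fetching an image URL should produce, so readable headers followed by unreadable data is normal.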
Intermediate & Advanced SEO | Autoboof
-
Google and PDF indexing
It was recently brought to my attention that one of the PDFs on our site wasn't showing up when searching for a particular phrase within the document. The user was trying to search only within our site. Once I removed the site restriction, I noticed that another site is using the exact same PDF. It appears Google is indexing that PDF but not ours. The name, title, and content are the same. Is there any way to get around this?

I find it interesting, as we use GSA, and within GSA the document shows up for the phrase. I have to imagine Google is saying that it already has the PDF and is therefore ignoring ours. Any tricks to get around this? BTW, both sites rightfully should have the PDF. One is a client site, and they are allowed to host the PDFs created for them. However, I'd like Mathematica to also be listed.

Query with no site restriction (notice: Teach For America comes up #1 and Mathematica is not listed):
https://www.google.com/search?as_q=&as_epq=HSAC_final_rpt_9_2013.pdf&as_oq=&as_eq=&as_nlo=&as_nhi=&lr=&cr=&as_qdr=all&as_sitesearch=&as_occt=any&safe=images&tbs=&as_filetype=pdf&as_rights=&gws_rd=ssl#q=HSAC_final_rpt_9_2013.pdf+"Teach+charlotte"+filetype:pdf&as_qdr=all&filter=0

Query with site restriction (notice that it doesn't find the phrase and falls back to matching any of the words):
https://www.google.com/search?as_q=&as_epq=HSAC_final_rpt_9_2013.pdf&as_oq=&as_eq=&as_nlo=&as_nhi=&lr=&cr=&as_qdr=all&as_sitesearch=&as_occt=any&safe=images&tbs=&as_filetype=pdf&as_rights=&gws_rd=ssl#as_qdr=all&q="Teach+charlotte"+site:www.mathematica-mpr.com+filetype:pdf
Intermediate & Advanced SEO | jpfleiderer
-
Keywords Ranking Varies When Search Changes Location/City (Not Google Places)
We have a client that ranks well in most Australian cities for competitive keywords, except in Google Sydney. If you toggle the city in the search field when searching for a keyword, their positions are almost exactly the same everywhere, except for Sydney, where they can't be found at all in the top 100 results. The keywords are not city-specific; they are general, commonly searched keywords about health. This is not a Google Places issue. The search results show the right landing pages of the site for their respective keywords. Any ideas or experience with this kind of situation? Much appreciated, Louie
Intermediate & Advanced SEO | louieramos
-
Google snippet chosen why?
We have a page about buying property in the Megeve area of the Alps in France. We are No. 2 on Google.co.uk for the term "megeve property for sale" and No. 1 for "megeve property". http://www.prestigeproperty.co.uk/MegeveProperty/Properties.asp

If you search for "megeve property for sale", Google serves our META description as the snippet:

Ski chalets, homes and apartments for sale in this exclusive, prestigious Rhone Alpes village - 520000-16500000 EUR.

However, we noticed that searching for just "megeve property" serves up a much better snippet taken from the text on the page:

A crucial factor for potential property buyers is that there is a strong rental market in Megève and this remains high all year around with properties close to the ...

Does anyone know why Google would serve this particular snippet instead of the META description? Is it the number of strong and descriptive words used, or some other reason?
Intermediate & Advanced SEO | PPGUKLTD
-
Google+ Local Pages
Hi, if I have a company with multiple addresses, do I create a separate Google+ page for each area?
Intermediate & Advanced SEO | Bryan_Loconto
-
Keyword search filter in Google AdWords: broad? exact? phrase?
Hello all, I am working on my website and analysing the potential best keywords for SEO (post/page names and URL paths).

1. I am using Google AdWords. Is there any other tool you would recommend?
2. Which match type should I select in the Google AdWords Keyword Tool in order to see the monthly global searches of the keywords I should target: exact, phrase, or broad?

For instance, for the keyword search "Information about Madrid":
BROAD MATCH: 300,000
EXACT MATCH: 1,500

Is the potential of the keyword 300,000? Are 300,000 searches undertaken in a month that contain that phrase and its variations? Or is the relevant keyword potential the exact match traffic?

Thank you very much! Antonio
Intermediate & Advanced SEO | aalcocer2003
-
Do search engines only count clicks on sites that have Google Analytics?
I am reading a thread right now and came across this statement: "Search engines can view clicks only if websites have Google Analytics or some toolbar installed. Obviously that's not the case with over 50% of websites. That's why I don't agree with your comment." True or false?
Intermediate & Advanced SEO | SEODinosaur