About Google Spider
-
Hello, people!
I have some questions about the Google spider (Googlebot).
Many people say that "Google spiders only have US IP addresses."
Is this really true?
But I also saw a video on Google's official blog saying that "Google spiders come from all around the world."
At this point I am really confused.
Q1) From my own research, it seems like Google spiders only have US IP addresses. Then what exactly does "Google spiders come from all around the world" mean?
Q2) If Google spiders only have US IP addresses, what happens to a site that uses IP delivery?
Does this mean a Google spider always gets redirected to the US version of the site, since it only has a US IP?
Can anyone help me understand?
One more question: when Google analyzes a site for cloaking, do you think it does so while the spider is crawling the site, or after the crawl is finished?
-
I think some of the confusion may come from Google primarily using IP addresses assigned to its headquarters in Mountain View, California. Google has many data centers (around 20) located outside the US, and I recall reading an article saying that at times they use their Mountain View IPs from data centers around the world. For security reasons, they don't want the locations of all their data centers to be publicly known.
I researched this topic before and was unable to find any official information from Google. It would only seem reasonable that they crawl from all over the world. If they didn't, then a lot of sites that use geo-based targeting for site navigation would not have most of their content indexed. While it's true a sitemap could be used to work around that issue, many sites that don't use sitemaps still get indexed.
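Whatever IPs Googlebot crawls from, relying on a fixed list of addresses is fragile. Google's documented way to confirm that a visitor claiming to be Googlebot really is one is a reverse-then-forward DNS check: resolve the IP to a hostname, check that it falls under Google's crawler domains, then resolve that hostname back and confirm it maps to the same IP. A minimal Python sketch of that check (the sample hostname in the docstring is illustrative):

```python
import socket

# Hostnames Google's crawlers resolve to, per Google's verification guidance.
GOOGLE_SUFFIXES = (".googlebot.com", ".google.com")

def host_is_google(hostname):
    """True if a reverse-DNS hostname (e.g. crawl-66-249-66-1.googlebot.com)
    falls under one of Google's crawler domains."""
    return hostname.rstrip(".").endswith(GOOGLE_SUFFIXES)

def verify_googlebot(ip):
    """Reverse DNS on the IP, then forward-confirm the hostname maps back
    to the same IP. Returns False on any lookup failure."""
    try:
        hostname, _aliases, _addrs = socket.gethostbyaddr(ip)  # reverse lookup
    except OSError:
        return False
    if not host_is_google(hostname):
        return False
    try:
        _name, _aliases, addresses = socket.gethostbyname_ex(hostname)  # forward lookup
    except OSError:
        return False
    return ip in addresses
```

The suffix check alone is not enough (anyone can point reverse DNS at a Google-looking name), which is why the forward confirmation matters.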
-
I do not believe this is true; Google has data centers all over the world that it crawls from.
Google does not only have spiders crawling from US data centers.
I also have the feeling crawls are influenced by many factors, such as link diversity per region, the domain's TLD per region, PageRank (still a crawling factor, in my opinion), and more.
Overall, don't stress: Google can crawl from various regions all over the globe. I would be more concerned about your server's geographic location, your domain's TLD, and your local links.
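On the IP-delivery worry in Q2: the safest pattern is to serve a sensible default version whenever the geo lookup fails or the visitor's country has no dedicated site, rather than hard-redirecting everyone (crawlers included) by IP. A hypothetical sketch of that fallback logic, with the country code assumed to come from whatever geo-IP library you use:

```python
DEFAULT_REGION = "us"
# Illustrative set of regions this hypothetical site has versions for.
SUPPORTED_REGIONS = {"us", "uk", "de", "jp"}

def choose_region(country_code):
    """Pick a site version from a geo-IP country code, falling back to the
    default when the lookup failed (None) or the country has no dedicated
    version. A crawler with a US IP simply gets the default US content."""
    if country_code is None:
        return DEFAULT_REGION
    code = country_code.lower()
    return code if code in SUPPORTED_REGIONS else DEFAULT_REGION
```

Pairing this with region links in the navigation (so every version is reachable by URL) keeps all versions crawlable regardless of where the spider's IP appears to be.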
Kind Regards,
James Norquay.