Location Based Content / Googlebot
-
Our website has local content specialized for specific cities and states. The URL structure for this content is as follows: www.root.com/seattle, www.root.com/washington. When a user comes to a page, we auto-detect their IP and send them directly to the relevant location-based page, much the way Yelp does. Unfortunately, what appears to be happening is that Google comes in to our site from one of its data centers, such as San Jose, and is being routed to the San Jose page. When users search for relevant keywords, the SERPs send them to the location pages that the bots appear to have come in from. If we turn off the auto geo-detection, we think Google might crawl our site better, but users would then be shown less relevant content on landing. What's the win/win situation here? Also, we appear to have some odd location/destination pages ranking high in the SERPs: locations that don't seem to correspond to any of Google's data centers. No idea why this might be happening. Suggestions?
-
I believe the current approach is already quite relevant to users, but do provide an option for visitors to change the location manually (it makes for a good user experience).
To get all of your links crawled by search engines, here are a few things to consider:
- Make sure the XML sitemap contains every link that appears on the website. Including all of the links in the sitemap helps Google discover and consider those pages.
- Link internally to all location pages. This encourages Google to index them and rank them for relevant terms.
- Social signals matter, so try to build social value for every location page; Google tends to crawl pages with strong social signals more often.
I think the current approach is sound; just add a manual location-change option for visitors who want it.
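For the sitemap point, here's a minimal sketch of generating an XML sitemap that lists every location page. The domain and location slugs are placeholders taken from the question, not a real site inventory:

```python
# Minimal XML sitemap generator for location pages.
# The base URL and the location list are hypothetical placeholders.
from xml.etree import ElementTree as ET

def build_sitemap(base_url, locations):
    """Return an XML sitemap string with one <url> entry per location page."""
    ns = "http://www.sitemaps.org/schemas/sitemap/0.9"
    urlset = ET.Element("urlset", xmlns=ns)
    for loc in locations:
        url = ET.SubElement(urlset, "url")
        ET.SubElement(url, "loc").text = f"{base_url}/{loc}"
    return ET.tostring(urlset, encoding="unicode")

sitemap = build_sitemap("http://www.root.com", ["seattle", "washington"])
print(sitemap)
```

Regenerate and resubmit this file whenever a location page is added, so every city/state URL is discoverable even if internal links miss it.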
-
Thanks Jarno
-
David,
Well explained. Excellent post +1
Jarno
-
Hi,
In regards to the geo-targeting, have a read of this case study. To me it's the definitive guide to the issue as it goes through most of the options available, and offers a pretty solid solution:
http://www.seomoz.org/ugc/territory-sensitive-international-seo-a-case-study
And if you are worrying about the white/black aspects of using these tactics, here is a great guide from Rand on acceptable cloaking techniques:
http://www.seomoz.org/blog/white-hat-cloaking-it-exists-its-permitted-its-useful
And finally a great 'Geo-targeting FAQ' piece from Tom Critchlow:
http://www.seomoz.org/blog/geolocation-international-seo-faq
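The common thread in those guides is to serve the same default page to every visitor, bot or human, and merely *suggest* the detected location instead of force-redirecting. A rough sketch of that decision logic, where the IP-to-city lookup is a hypothetical stub rather than a real geo database:

```python
# Sketch: suggest a location instead of redirecting by IP.
# Every visitor (Googlebot included) gets the same default page; the
# detected city only drives a banner suggestion. GEO_DB is a made-up stub.
GEO_DB = {"66.249.": "san-jose", "203.0.113.": "seattle"}  # hypothetical IP prefixes

def lookup_city(ip):
    for prefix, city in GEO_DB.items():
        if ip.startswith(prefix):
            return city
    return None

def handle_request(ip, chosen_city=None):
    """Return (page_to_serve, suggested_city).

    chosen_city is a location the user picked manually (e.g. stored in a
    cookie); it always wins. Otherwise serve the generic root page and
    suggest the geo-detected city in a banner rather than redirecting.
    """
    if chosen_city:
        return f"/{chosen_city}", None
    return "/", lookup_city(ip)

# A crawler IP and a user IP both receive the same default page:
print(handle_request("66.249.66.1"))                          # ('/', 'san-jose')
print(handle_request("203.0.113.9", chosen_city="seattle"))   # ('/seattle', None)
```

Because the response never varies by user agent, there is no cloaking risk, and Googlebot can crawl every location page through ordinary links.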
In regards to the other locations ranking that you don't think have been crawled, this is probably down to the number/strength of the links pointing at those sections. Google has stated in various Webmaster videos that a page doesn't necessarily need to be crawled to be indexed (weird, huh?); Google just needs to know it exists.
If there are plenty of links pointing at a page, Google can still treat it as an authoritative/relevant result even if it hasn't crawled the page content itself. It can use other signals, such as anchor text, to determine the relevancy for a given search term.
Here is an example video from Matt Cutts where he discusses the issue:
http://www.youtube.com/watch?v=KBdEwpRQRD0
Best of luck
David
Related Questions
-
Will adding /blog/ to my urls affect SEO rankings?
Following advice from an external SEO agency, I removed /blog/ from our permalinks late last year. The logic was that it a) doesn't help SEO and b) reduces the character count of the slug. Both points make sense. However, it makes segmenting blog posts from other content in Google Analytics impossible. If I were to add /blog/ back into my URLs and redirect the old permalinks, would it harm my rankings? Thanks!
-
Confused about repeated occurrences of URL/essayorg/topic/ showing up as 404 errors in our site logs
Working on a WordPress website, https://thedoctorwithin.com. Scanning the site's 404 errors, I'm seeing a lot of requests for URL/essayorg/topic coming from Bingbot, as well as other spiders (Google, Open Site Explorer). We get at least 200 of these irrelevant requests per week. It seems like each topic that follows /essayorg/ is unique, and some include typos: /dissitation/. I haven't yet verified that the spiders are who they say they are. It almost seems like there are many links 'in the wild' intended for Essay.org that are being directed at the site I'm working on. I've considered redirecting any request for URL/essayorg/ to our sitemap, figuring that might encourage further spidering of actual site content. Is redirecting to our sitemap XML file a good idea, or might doing so have unintended consequences? I'm also interested in suggestions about why this might be occurring. Thank you.
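One alternative to a blanket sitemap redirect (which would turn hundreds of unrelated URLs into soft duplicates of one file) is to answer the bogus paths with 410 Gone so crawlers drop them. A sketch of that classification; the prefixes come from the question, and the 410-versus-404 choice is a judgment call, not official guidance:

```python
# Sketch: answer known-bogus crawl paths with 410 Gone instead of
# redirecting them all to the sitemap. Prefixes are taken from the
# question; everything else falls through to 200 or a plain 404.
BOGUS_PREFIXES = ("/essayorg/", "/dissitation/")

def status_for(path, known_paths):
    """Pick an HTTP status for an incoming request path."""
    if path in known_paths:
        return 200
    if path.startswith(BOGUS_PREFIXES):
        return 410   # tell crawlers the URL is permanently gone
    return 404

known = {"/", "/sitemap.xml"}
print(status_for("/essayorg/topic/foo", known))  # 410
print(status_for("/", known))                    # 200
print(status_for("/random", known))              # 404
```

A consistent 410 usually makes bots retire the URLs faster than 404s, and it avoids teaching them that junk paths lead somewhere crawlable.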
-
Will this URL structure: "domain.com/s/content-title" cause problems?
Hey all, we have a new in-house-built tool for building content. The problem is it automatically inserts a letter directly after the domain. The content we build with these pages isn't all related, so we could end up with a bunch of URLs like this:
domain.com/s/some-calculator
domain.com/s/some-infographic
domain.com/s/some-long-form-blog-post
domain.com/s/some-product-page
Could this cause any significant issues down the line?
-
Two domains / same Content
Hi MOZzers, I have recently started working for a client who owns two domains (as recommended by their web development company); each domain is a complete duplicate of the other. The only difference is that one is a totally keyword-focused domain name, while the other is their brand name, which also contains a keyword. In a search for blocks of content the keyword-focused domain comes up and the other doesn't, and when I searched for one of their primary services, the keyword-focused domain again appeared on the first page, but the branded domain also appeared on the second. The web development company has been managing this company's AdWords account and promoting the brand name, and until today I was unaware of the other domain. Can I have some thoughts: do I ask the web developers to redirect one to the other, or leave it as is?
-
Page Content
Our site is a home-to-home moving listings portal. Consumers who want to move fill in a form so that moving companies can quote prices. We were generating listing-page URLs from the title submitted by the customer. Unfortunately, we have now realized that many customers entered exactly the same title for their listings, which left us with hundreds of similar page titles. We have corrected all the pages that had duplicate meta tags and page title tags, and we have added controls to our software to prevent generating duplicate titles or meta tags. But the page content quality is still not very good, because the content is added by the customer (example: http://www.enakliyat.com.tr/detaylar/evden-eve--6001). What should I do? Please help me.
-
Avoiding Duplicate Content in E-Commerce Product Search/Sorting Results
How do you handle sorting on ecommerce sites? Does it look something like this? For example:
example.com/inventory.php
example.com/inventory.php?category=used
example.com/inventory.php?category=used&price=high
example.com/inventory.php?category=used&location=seattle
If not, how would you handle this? If so, would you just include a noindex tag on all sorted pages to avoid duplicate content issues? Also, how does pagination play into this? Would it be something like this? For example:
example.com/inventory.php?category=used&price=high
example.com/inventory.php?category=used&price=high&page=2
example.com/inventory.php?category=used&price=high&page=3
If not, how would you handle this? If so, would you still include a noindex tag? Would you include a rel=next/prev tag on these pages in addition to, or instead of, the noindex tag? I hope this makes sense. Let me know if you need me to clarify any of this. Thanks in advance for your help!
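One way to implement the noindex decision the question describes is to key it off the query parameters: index the clean category URL, noindex sorted and paginated variants, and point a canonical at the unsorted version. A sketch; the parameter names follow the example URLs above, and treating only `category` as indexable is an assumption, not a rule:

```python
# Sketch: robots/canonical decisions for faceted inventory URLs.
# Parameter names (category, price, location, page) follow the question.
from urllib.parse import urlsplit, parse_qs, urlencode, urlunsplit

INDEXABLE_PARAMS = {"category"}  # facets worth indexing; an assumption

def meta_robots(url):
    """noindex any URL carrying sort/pagination parameters."""
    params = parse_qs(urlsplit(url).query)
    extra = set(params) - INDEXABLE_PARAMS
    return "noindex, follow" if extra else "index, follow"

def canonical(url):
    """Canonical URL keeps only the indexable parameters."""
    parts = urlsplit(url)
    params = {k: v for k, v in parse_qs(parts.query).items()
              if k in INDEXABLE_PARAMS}
    query = urlencode(params, doseq=True)
    return urlunsplit((parts.scheme, parts.netloc, parts.path, query, ""))

url = "http://example.com/inventory.php?category=used&price=high&page=2"
print(meta_robots(url))   # noindex, follow
print(canonical(url))     # http://example.com/inventory.php?category=used
```

Keeping "follow" on the noindexed variants lets link equity flow through sorted pages back to the canonical listing.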
-
301'ing googlebot
I have a client that has been 301'ing Googlebot to the canonical page. This is because they have cart_id and session parameters in their URLs. It mainly happens when Googlebot comes in on a link that contains these parameters; they don't serve the parameters to Googlebot at all once it starts to crawl the site.
I am worried about cloaking, and I wanted to know if anyone has any info on this.
I know Google has said that detecting Googlebot's user agent and treating it differently is a problem.
Has anybody had any experience with this? I would be glad to hear about it.
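The usual way to sidestep the cloaking worry is to stop detecting Googlebot at all: 301 *every* visitor from a parameter-laden URL to its canonical form, carrying the session in a cookie instead. A sketch of that parameter stripping; the parameter names cart_id and session come from the question:

```python
# Sketch: strip session parameters for ALL user agents, so no
# user-agent detection (and no cloaking) is involved.
from urllib.parse import urlsplit, parse_qsl, urlencode, urlunsplit

STRIP_PARAMS = {"cart_id", "session"}  # names taken from the question

def canonical_redirect(url):
    """Return the 301 target with session parameters removed,
    or None if the URL is already canonical (serve it directly)."""
    parts = urlsplit(url)
    pairs = parse_qsl(parts.query)
    kept = [(k, v) for k, v in pairs if k not in STRIP_PARAMS]
    if len(kept) == len(pairs):
        return None  # nothing to strip
    query = urlencode(kept)
    return urlunsplit((parts.scheme, parts.netloc, parts.path, query, ""))

print(canonical_redirect("http://shop.example/item?id=7&session=abc"))
# http://shop.example/item?id=7
print(canonical_redirect("http://shop.example/item?id=7"))  # None
```

Since every client gets the identical redirect, the behavior is indistinguishable from ordinary URL canonicalization rather than cloaking.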
Duplicate Content
Hello all, my first web crawl has come back with a duplicate content warning for www.simodal.com and www.simodal.com/index.htm. Slightly mystified! Thanks, Paul