Location Based Content / Googlebot
-
Our website has local content specialized to specific cities and states. The URL structure of this content is as follows: www.root.com/seattle, www.root.com/washington. When a user comes to a page, we auto-detect their IP and send them directly to the relevant location-based page - much the way that Yelp does. Unfortunately, what appears to be occurring is that Google comes in to our site from one of its data centers, such as San Jose, and is being routed to the San Jose page. When a user searches for relevant keywords, the SERPs send them to the location pages that the bots appear to be coming in from. If we turn off the auto geo-detection, we think Google might crawl our site better, but users would then be shown less relevant content on landing. What's the win/win situation here?
Also - we appear to have some odd location/destination pages ranking high in the SERPs; in other words, locations that don't appear to correspond to one of Google's data centers. No idea why this might be happening. Suggestions?
-
I believe the current approach is already pretty relevant to the user, but do provide the option to change the location manually if the user wants to (it makes for a good user experience).
To get all of your links crawled by search engines, here are a few things to consider:
- Make sure the sitemap contains every link that appears on the website. Including all of the links in the XML sitemap helps Google discover and consider those pages (see the example sitemap at the end of this answer).
- Point internal links at all of the location pages. This helps Google index those pages and rank them for relevant terms.
- Social signals matter too: try to build social shares and links to each location page, as Google tends to crawl pages with good social visibility more readily.
I think the current approach is fine; just add a manual "change location" option for visitors who want it.
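To illustrate the sitemap point, a minimal XML sitemap that lists every location page explicitly might look like the sketch below. The /seattle and /washington URLs come from the question; /san-jose is just a placeholder for the rest of the location pages.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- One <url> entry per location page, so Google can discover them all
       even though the geo-redirect keeps a visitor on a single city page -->
  <url><loc>http://www.root.com/seattle</loc></url>
  <url><loc>http://www.root.com/washington</loc></url>
  <url><loc>http://www.root.com/san-jose</loc></url>
</urlset>
```

Submit it in Webmaster Tools and reference it from robots.txt so the location pages get discovered regardless of where the crawler's IP lands.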
-
Thanks Jarno
-
David,
well explained. Excellent post +1
Jarno
-
Hi,
Regarding the geo-targeting, have a read of this case study. To me it's the definitive guide to the issue, as it goes through most of the options available and offers a pretty solid solution:
http://www.seomoz.org/ugc/territory-sensitive-international-seo-a-case-study
And if you are worried about the white-hat/black-hat aspects of using these tactics, here is a great guide from Rand on acceptable cloaking techniques:
http://www.seomoz.org/blog/white-hat-cloaking-it-exists-its-permitted-its-useful
And finally, a great 'Geo-targeting FAQ' piece from Tom Critchlow:
http://www.seomoz.org/blog/geolocation-international-seo-faq
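The common thread in those write-ups is to stop force-redirecting on IP: serve the default page to everyone, suggest the visitor's local version, and let them switch manually (remembering the choice). Here is a rough sketch of that idea, assuming a Python/Flask stack and a hypothetical GeoIP helper you'd supply yourself - not your actual implementation, just the shape of it:

```python
from flask import Flask, request, render_template, redirect

app = Flask(__name__)

def city_from_ip(ip):
    """Hypothetical helper - plug in a GeoIP lookup (e.g. a MaxMind database) here."""
    return None

@app.route("/")
def home():
    # No forced 302 off the default page: crawlers and first-time visitors all
    # get the same crawlable homepage. We only *suggest* a local version (e.g.
    # via a banner), using a saved preference first and GeoIP as a fallback.
    suggested = request.cookies.get("preferred_city") or city_from_ip(request.remote_addr)
    return render_template("home.html", suggested_city=suggested)

@app.route("/set-location/<city>")
def set_location(city):
    # Manual "change my location" override: remember the choice in a cookie
    # and send the visitor to the matching location page.
    resp = redirect("/" + city)
    resp.set_cookie("preferred_city", city)
    return resp
```

Because nobody is auto-redirected based on IP, there is nothing crawler-specific going on, which sidesteps most of the cloaking questions covered in Rand's post above.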
As for the other location pages ranking that you don't think have been crawled, this is probably down to the number/strength of the links pointing at those sections. Google has stated in various Webmaster videos that a page doesn't necessarily need to be crawled to be indexed (weird, huh?) - Google just needs to know it exists.
If there are plenty of links pointing at a page, Google can still treat it as an authoritative/relevant result even if it hasn't crawled the page content itself; it can use other signals, such as anchor text, to determine the relevancy for a given search term.
Here is an example video from Matt Cutts where he discusses the issue:
http://www.youtube.com/watch?v=KBdEwpRQRD0
Best of luck
David
Related Questions
-
Duplicate Content and Subdirectories
Hi there and thank you in advance for your help! I'm seeking guidance on how to structure a resources directory (white papers, webinars, etc.) while avoiding duplicate content penalties. If you go to /resources on our site, there is a filter function. If you filter for webinars, the URL becomes /resources/?type=webinar We didn't want that dynamic URL to be the primary URL for webinars, so we created a new page with the URL /resources/webinar that lists all of our webinars and includes a featured webinar up top. However, the same webinar titles now appear on the /resources page and the /resources/webinar page. Will that cause duplicate content issues? P.S. Not sure if it matters, but we also changed the URLs for the individual resource pages to include the resource type. For example, one of our webinar URLs is /resources/webinar/forecasting-your-revenue Thank you!
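For reference, the usual pattern for the filtered URL described above is a rel=canonical on the parameterised version pointing at the preferred static listing page - a sketch only, with a placeholder domain:

```html
<!-- Served on /resources/?type=webinar, declaring /resources/webinar as the preferred URL -->
<link rel="canonical" href="https://www.example.com/resources/webinar" />
```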
Technical SEO | SAIM_Marketing0
-
Canonical vs Alternate for country based subdomain dupe content?
What's the correct method for tagging dupe content between country-based subdomains? We have:
- mydomain.com // default, en-us
- www.mydomain.com // en-us
- uk.mydomain.com // uk, en-gb
- au.mydomain.com // australia, en-au
- eu.mydomain.com // europe, en-eu
In the header of each we currently have rel="alternate" tags, but we're still getting dupe content warnings in Moz for the "www" subdomain.
Question 1) Are we headed in the right direction with using alternate? Or would it be better to use canonical, since the languages are technically all English, just different regions? The content is pretty much the same minus currency and localization differences.
Question 2) How can we solve the dupe content between www and the base domain, since the above isn't working?
Thanks so much
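For reference, the hreflang markup usually discussed for this kind of setup looks roughly like the snippet below, placed on every version, with each page listing all of its alternates plus an x-default. The URLs mirror the subdomains above and the protocol is a placeholder; note that "en-eu" isn't a valid hreflang value (EU isn't a country code), so the Europe subdomain is shown with plain "en" as an assumption:

```html
<link rel="alternate" hreflang="x-default" href="https://mydomain.com/" />
<link rel="alternate" hreflang="en-us" href="https://www.mydomain.com/" />
<link rel="alternate" hreflang="en-gb" href="https://uk.mydomain.com/" />
<link rel="alternate" hreflang="en-au" href="https://au.mydomain.com/" />
<link rel="alternate" hreflang="en" href="https://eu.mydomain.com/" />
```

A canonical, by contrast, would ask Google to fold the regional versions into a single one, which matters if each region is meant to rank in its own market - that's essentially the trade-off the question is weighing.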
Technical SEO | lvdh11
-
Approach for an established site looking to serve different content to regions in a single country/lang
Hi guys, I have an established site that currently serves the same content to all regions - west and east - in a single country with the same language. We are now looking to vary the content across the west and east regions - not dramatically, but the products offered will be slightly different. From what I gather, modifying the URL is best for countries, so it feels like overkill for regions within the same country. I'm also unlikely to have very unique content outside of the varied products, so I'm mindful of duplicate/similar content, but I know I can use canonical tags to address that. I have a fairly modern CMS that can target content based on region, but I'm mindful of upsetting Google re: showing different content to what the bot might encounter, assuming this is still a thing. So, three questions from an SEO perspective:
1. Do I really need to focus on changing my URL structure, especially as I'm already established in a competitive market, or will I do more harm than good? Is the region in the URL a strong signal?
2. If I should make some changes to the URL and/or metadata, what are the best bang-for-buck changes you would make?
3. How does Google Local fit into this? Is it a separate process via Webmaster Tools, or does it align with the above changes?
Cheers!!! Jez
Technical SEO | jez0000
-
Http:// to https:// 301 or 302 redirect
I've read over the Q&A in the Community, but am wondering about the reasoning behind this issue. I know - 301s are permanent and pass links, and 302s are temporary (due to cache) and don't pass links. But I've run across two sites now that 302 redirect http:// to https://. Is there a valid reason behind this? From my POV and research, the redirect should be a 301 if it's permanent, but is there a larger issue I am missing?
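As a quick way to see which status code a given site actually returns for that http-to-https hop, here is a small sketch using Python's requests library (example.com is a placeholder):

```python
import requests

# Fetch the http:// URL without following redirects so we can inspect the hop itself.
resp = requests.get("http://example.com/", allow_redirects=False, timeout=10)
print(resp.status_code)               # 301 = permanent, 302 = temporary
print(resp.headers.get("Location"))   # where the redirect points (ideally the https:// version)
```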
Technical SEO | FOTF_DigitalMarketing1
-
Domain Forwarding / Multiple Domain Names / or Rebuild Blogs on them
I am considering forwarding 3 very aged and valuable domain names to my main site. There were once over 100 blog posts on each blog, and each one has a page authority of 45 and domain authority of 37. My question is: should I put up three blogs on the domains and link them to my site, or should I just forward the domains to my main site? Which will provide me with more value? I have the capability to have someone blog on them every day. However, I do not have access to any of the old blog posts. I guess I could scrape them off archive.org. Any advice would be appreciated. Scott
Technical SEO | WindshieldGuy-2762210
-
Block /tag/ or not?
I've asked this question in another area, but now I want to ask it as a bigger question. Do we block /tag/ with robots.txt or not? Here's why I ask: my WordPress site does not block /tag/, and I have many /tag/ results in the top 10 results of Google. Have for months. The question is, does Google see /tag/ on WordPress as duplicate content? SEOMoz says it's duplicate content, but it's a tag - it's not really content per se. I'm all for optimizing my site, but Google is not penalizing me for /tag/ results. I don't want to block /tag/ if Google is not seeing it as duplicate content, for one simple reason: I have many results in the top 10 on G. So, can someone who knows more about this weigh in on the subject? I really would like an accurate answer. Thanks in advance...
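For reference, the rule being debated above is just this in robots.txt (a sketch assuming WordPress-style /tag/ paths; note that Disallow only stops crawling - it doesn't remove already-indexed /tag/ URLs, and a noindex on the tag archives is the alternative many people use instead):

```
User-agent: *
Disallow: /tag/
```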
Technical SEO | MyAllenMedia0
-
Duplicate Content
Hi - We are due to launch a .com version of our site, with the ability to put prices into local currency, whereas our .co.uk site will be solely £. If the content on both the .com and .co.uk sites is the same (at product level mainly), will we be penalised? What is the best way to get around this?
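For reference, this same-language/different-country situation is often handled with hreflang annotations between the two sites rather than a canonical (a canonical would ask Google to fold one version into the other). A minimal sketch with placeholder URLs, as it might appear on a .co.uk product page:

```html
<link rel="alternate" hreflang="en-gb" href="https://www.example.co.uk/widget" />
<link rel="alternate" hreflang="en" href="https://www.example.com/widget" />
<!-- The .com page carries the same pair, so the two versions reference each other -->
```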
Technical SEO | swgolf1230
-
How much javascript does Googlebot read
We have a site where we have certain navigational links solely for the human user. These links help the user experience and lead to pages that we don't need crawled by Googlebot. We have these links in JavaScript, so if you disable JavaScript the links are invisible. Will these links be considered cloaking, even though our intention is not to cloak but to save our Google crawl for pages we do want indexed?
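To make the setup described above concrete, this is roughly the distinction in play - a plain anchor that any crawler can follow versus a link that only exists once JavaScript runs (paths are placeholders, and how much of the second kind Googlebot executes has changed over time, so treat this purely as an illustration):

```html
<!-- Ordinary anchor: visible in the HTML source, crawlable -->
<a href="/locations/">Browse all locations</a>

<!-- Link added only by JavaScript: invisible when scripts are disabled -->
<script>
  var link = document.createElement("a");
  link.href = "/member/preferences";
  link.textContent = "Your preferences";
  document.body.appendChild(link);
</script>
```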
Technical SEO | CruiseControl0