Location Based Content / Googlebot
-
Our website has local content specialized to specific cities and states. The URL structure of this content is as follows: www.root.com/seattle, www.root.com/washington. When a user comes to a page, we auto-detect their IP and send them directly to the relevant location-based page, much the way Yelp does. Unfortunately, what appears to be happening is that Google comes in to our site from one of its data centers, such as San Jose, and is routed to the San Jose page. When users search for relevant keywords, the SERPs send them to whichever location pages the bots appear to be coming in from. If we turn off the auto geo-detection, we think Google might crawl our site better, but users would then be shown less relevant content on landing. What's the win/win situation here?
Also, we appear to have some odd location/destination pages ranking high in the SERPs: locations that don't appear to correspond to any of Google's data centers. No idea why this might be happening. Suggestions?
-
I believe the current approach is already pretty relevant to users, but do provide an option to change the location manually if a user wants to. (It makes for a good user experience!)
To get all your links crawled by search engines, here are a few things you should consider:
- Make sure the sitemap contains every link that appears on the website. Including all the location URLs in the XML sitemap will help Google discover and consider those pages.
- Link internally to all location pages. This will help Google index those pages and rank them for relevant terms.
- Social signals are important: try to build social value for all the location pages, as Google tends to crawl pages with good social signals more often.
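The sitemap suggestion above can be sketched as a minimal generator that writes one `<url>` entry per location page. This is an illustrative snippet, not from the original site; the URLs are the hypothetical ones from the question:

```python
from xml.sax.saxutils import escape

def build_location_sitemap(location_urls):
    """Build a minimal XML sitemap string listing every location page,
    so Google can discover them without relying on the geo redirect."""
    entries = "\n".join(
        f"  <url><loc>{escape(url)}</loc></url>" for url in location_urls
    )
    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
        + entries + "\n</urlset>"
    )
```

Submit the generated file in Webmaster Tools so the location pages are discovered independently of the redirect logic.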
In short, I think the current approach is sound; just add a manual location-change option for visitors who want it.
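A minimal sketch of that manual override, assuming the user's explicit choice is stored in a cookie (all names here are illustrative, not from the original site):

```python
def resolve_location_page(geoip_guess, cookie_choice, known_locations):
    """Decide which location page to serve: an explicit choice saved
    in a cookie always beats the IP-based guess, and an unrecognised
    IP falls back to a generic national page."""
    if cookie_choice in known_locations:
        return f"/{cookie_choice}"   # the user picked this themselves
    if geoip_guess in known_locations:
        return f"/{geoip_guess}"     # best guess from the visitor's IP
    return "/"                        # generic fallback, nothing assumed
```

The key design point is precedence: the geo-IP lookup only ever fills in a default, and it never overrides a choice the visitor has made.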
-
Thanks, Jarno
-
David,
well explained. Excellent post +1
Jarno
-
Hi,
In regards to the geo-targeting, have a read of this case study. To me it's the definitive guide to the issue as it goes through most of the options available, and offers a pretty solid solution:
http://www.seomoz.org/ugc/territory-sensitive-international-seo-a-case-study
And if you are worrying about the white/black aspects of using these tactics, here is a great guide from Rand on acceptable cloaking techniques:
http://www.seomoz.org/blog/white-hat-cloaking-it-exists-its-permitted-its-useful
And finally, a great 'Geo-targeting FAQ' piece from Tom Critchlow:
http://www.seomoz.org/blog/geolocation-international-seo-faq
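One practical pattern discussed in guides like these: don't auto-redirect search engine crawlers at all; let them land on the default page and reach every location page through normal links, and verify that a visitor claiming to be Googlebot really is one via reverse DNS rather than trusting the user-agent string alone. A rough sketch of that check, assuming a Python backend (function names are illustrative):

```python
import socket

CRAWLER_TOKENS = ("googlebot", "bingbot", "slurp")

def is_declared_crawler(user_agent: str) -> bool:
    """Cheap first pass: does the UA string claim to be a crawler?"""
    ua = user_agent.lower()
    return any(token in ua for token in CRAWLER_TOKENS)

def is_verified_googlebot(ip: str) -> bool:
    """Google's documented verification: the reverse DNS hostname must
    end in googlebot.com or google.com, and the forward lookup of that
    hostname must resolve back to the same IP."""
    try:
        hostname = socket.gethostbyaddr(ip)[0]
        if not hostname.endswith((".googlebot.com", ".google.com")):
            return False
        return ip in socket.gethostbyname_ex(hostname)[2]
    except OSError:
        return False

def should_geo_redirect(user_agent: str) -> bool:
    """Only auto-redirect ordinary visitors; crawlers get the default
    page so every location page stays reachable through links."""
    return not is_declared_crawler(user_agent)
```

Because the crawler still sees the same content any non-redirected user could reach, this stays on the "acceptable" side of the cloaking line that Rand's post draws.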
In regards to the other locations ranking that you don't think have been crawled, this is probably down to the number/strength of the links pointing at those sections. Google has stated in various Webmaster videos that a page doesn't necessarily need to be crawled to be indexed (weird, huh?); Google just needs to know it exists.
If there are plenty of links pointing at a page, Google can still treat it as an authoritative/relevant result even if it hasn't crawled the page content itself. It can use other signals, such as anchor text, to determine relevancy for a given search term.
Here is an example video from Matt Cutts where he discusses the issue:
http://www.youtube.com/watch?v=KBdEwpRQRD0
Best of luck
David
Related Questions
-
Stop Google indexing entire website based on search location
OK, bear with me... We have a .co.uk website. However, we only want it indexed in the US Google and NOT the UK Google. Is there a way of configuring this in Search Console / Webmaster Tools?
Technical SEO | AbsoluteDesign
Duplicate Page Content Issue
Hello, I recently solved the www / non-www duplicate issue for my website, but now I am in trouble with duplicate content again. This time something happens that I cannot understand: in the Crawl Issues report, I received Duplicate Page Content for http://yourappliancerepairla.com (DA 19) and http://yourappliancerepairla.com/index.html (DA 1). Could you please help me figure out what is happening here? By default, index.html is being loaded, and it is the only index.html I have in the folder. Yet the crawler sees two different pages with different DA... What should I do to handle this issue?
Technical SEO | kirupa
Duplicate Content?
My site has been archiving our newsletters since 2001. It's been helpful because our site visitors can search a database for ideas from those newsletters. (There are hundreds of pages with similar titles: archive1-Jan2000, archive2-feb2000, archive3-mar2000, etc.) But I see they are being marked as "similar content," even though the actual page content is not the same. Could this adversely affect SEO? And if so, how can I correct it? Would a separate folder of archived pages with a "nofollow robot" solve this issue? And would my site visitors still be able to search within the site with a nofollow robot?
Technical SEO | sakeith
Reusing content owned by the client on websites for other locations?
Hello All! Newbie here, so I'm working through some of my questions 🙂 I have two major questions regarding duplicate content: _Say a medical hospital has 4 locations and chooses to create 4 separate websites. Each website would have the same design, but different NAP, contact info, etc. Essentially, we'd be looking at creating their own branded template._ My questions: 1.) If the hospitals all offer similar services, with roughly the same nav, does it make sense to have multiple websites? I figure this makes the most sense in terms of optimizing for their differing locations. 2.) If the hospital owns the content on the first site, I'm assuming it is still necessary to rewrite it for the other properties to avoid duplication? Or is it possible for Google to distinguish duplication of owned content from other instances of content duplication? Everyone has been fantastic here so far, looking forward to some feedback!
Technical SEO | kbaltzell
/~username
Hello, The utility on this site that crawls your site and highlights potential problems reported an issue with /~username access, seeing it as duplicate content, i.e. mydomain.com/file.htm is the same as mydomain.com/~username/file.htm. So I went to my server hosts and they disabled it using mod_userdir, but GWT now gives loads of 404 errors. Have I gone about this the wrong way? Or was it not really a problem in the first place, and have I fixed something that wasn't broken and made things worse? Thanks, Ian
Technical SEO | jwdl
To 301 redirect or not to 301 redirect? duplicate content problem www.domain.com and www.domain.com/en/
Hello, If your website is getting flagged for duplicate content between your main domain www.domain.com and your multilingual English domain www.domain.com/en/, is it wise to 301 redirect the English multilingual site to the main site? Please advise. We recently installed the Joomish component on one of our Joomla websites in an effort to streamline a Spanish translation of the website. The translation was a success and the new Spanish webpages were indexed, but unfortunately one of the web developers enabled the English part of the component, some English webpages were also indexed under the multilingual English domain www.domain.com/en/, and that flagged us for duplicate content. I added a 301 redirect to send all visitors from the www.domain.com/en/ webpages to the main www.domain.com/ webpages. But is that the proper way of handling this problem? Please advise.
Technical SEO | Chris-CA
Trackback/Syndication
Using WordPress or any other blog platform, how do I properly syndicate an article without duplication risk? Can I trackback by just leaving a link to the original within or at the bottom of a post, or is there specific code to add? What is the best way to trackback?
Technical SEO | SEODinosaur
Google Off/On Tags
I came across this article about telling Google not to crawl a portion of a webpage, but I never hear anyone in the SEO community talk about these tags. http://perishablepress.com/press/2009/08/23/tell-google-to-not-index-certain-parts-of-your-page/ Does anyone use them and find them to be effective? If not, how do you suggest noindexing/canonicalizing a portion of a page to avoid duplicate content that shows up on multiple pages?
Technical SEO | Hakkasan