Location-Based Content / Googlebot
-
Our website has local content specialized to specific cities and states. The URL structure of this content is as follows:

www.root.com/seattle
www.root.com/washington

When a user comes to a page, we auto-detect their IP and send them directly to the relevant location-based page, much the way Yelp does. Unfortunately, what appears to be happening is that Google crawls in from one of its data centers, such as San Jose, and gets routed to the San Jose page. So when users search for relevant keywords, the SERPs send them to whichever location pages the bots happened to come in from. If we turn off the auto geo-detection, we think Google might crawl our site better, but users would then be shown less relevant content on landing. What's the win/win situation here?

Also, we have some odd location/destination pages ranking high in the SERPs, i.e. locations that don't appear to correspond to any of Google's data centers. No idea why this might be happening. Suggestions?
-
I believe the current approach is pretty relevant to the user, but do provide an option to change the location if a user wants to set it manually (it will be a good user experience).

To get all of your links crawled by search engines, here are a few things to consider:

- Make sure the sitemap contains every link that appears on the website. Including all the links in the XML sitemap will help Google consider those pages (see the sketch after this list).
- Point internal links at all of the location pages. This will help Google index those pages and rank them for relevant terms.
- Social signals are important, so try to build social value for all of the location pages, as Google tends to crawl pages with good social value.
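For illustration only, a minimal XML sitemap covering the location URLs from the question might look like this (the URLs are just the examples given above):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- One <url> entry per location page -->
  <url>
    <loc>http://www.root.com/seattle</loc>
  </url>
  <url>
    <loc>http://www.root.com/washington</loc>
  </url>
</urlset>
```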
I think the current approach is sound; just add a manual change-location option if a visitor wants it, along the lines of the sketch below.
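A minimal sketch of that pattern, assuming a Python/Flask stack (the framework and the `lookup_city` helper are my assumptions for illustration, not anything from the original site; a real deployment would use an actual GeoIP library such as MaxMind's GeoIP2):

```python
from flask import Flask, make_response, redirect, request

app = Flask(__name__)

def lookup_city(ip):
    """Hypothetical GeoIP lookup -- swap in a real library (e.g. MaxMind GeoIP2)."""
    return "seattle"  # placeholder

@app.route("/")
def home():
    # A manually chosen location (stored in a cookie) always wins...
    chosen = request.cookies.get("preferred_location")
    if chosen:
        return redirect(f"/{chosen}", code=302)
    # ...and IP-based detection is only the fallback for first-time visitors.
    return redirect(f"/{lookup_city(request.remote_addr)}", code=302)

@app.route("/set-location/<city>")
def set_location(city):
    # The "change my location" link: remember the choice, then send the user there.
    resp = make_response(redirect(f"/{city}", code=302))
    resp.set_cookie("preferred_location", city, max_age=60 * 60 * 24 * 365)
    return resp
```

The 302 (rather than 301) on the home page matters here: the redirect target varies by visitor, so it shouldn't be cached as permanent.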
-
Thanks, Jarno

-
David,
Well explained. Excellent post, +1
Jarno
-
Hi,
Regarding the geo-targeting, have a read of this case study. To me it's the definitive guide to the issue, as it goes through most of the options available and offers a pretty solid solution:
http://www.seomoz.org/ugc/territory-sensitive-international-seo-a-case-study
And if you are worried about the white-hat/black-hat aspects of these tactics, here is a great guide from Rand on acceptable cloaking techniques:
http://www.seomoz.org/blog/white-hat-cloaking-it-exists-its-permitted-its-useful
And finally, a great 'Geo-targeting FAQ' piece from Tom Critchlow:
http://www.seomoz.org/blog/geolocation-international-seo-faq
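One related safeguard worth adding (my own sketch, not something the posts above prescribe): if you keep any crawler-specific handling, identify Googlebot with the two-step reverse/forward DNS check Google documents, rather than trusting the user-agent string alone. Python used purely for illustration:

```python
import socket

def is_verified_googlebot(ip):
    """Google's documented check: the reverse DNS of the IP must end in
    googlebot.com or google.com, and the forward lookup of that hostname
    must resolve back to the same IP."""
    try:
        host = socket.gethostbyaddr(ip)[0]
    except socket.herror:
        return False
    if not host.endswith((".googlebot.com", ".google.com")):
        return False
    try:
        return ip in socket.gethostbyname_ex(host)[2]
    except socket.gaierror:
        return False
```

A verified crawler could then be served the neutral (non-redirected) version of the page, which keeps every location page crawlable without degrading the experience for real visitors.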
Regarding the other locations ranking that you don't think have been crawled, this probably comes down to the number and strength of the links pointing at those sections. Google has stated in various Webmaster videos that a page doesn't necessarily need to be crawled to be indexed (weird, huh?); Google just needs to know it exists.
If there are plenty of links pointing at a page, Google can still treat it as an authoritative/relevant result even without crawling the page content itself, using other signals such as anchor text to determine relevancy for a given search term.
Here is an example video from Matt Cutts where he discusses the issue:
http://www.youtube.com/watch?v=KBdEwpRQRD0
Best of luck
David