Google crawling different content--ever ok?
-
Here are a couple of scenarios I'm encountering where Google will crawl different content than my users see on an initial visit to the site, and which I think should be OK. Of course, this is normally NOT OK; I'm here to find out whether Google is flexible enough to allow these situations:
1. My mobile-friendly site has users select a city, and then it displays a location-options div that explains why they may want the program to use their GPS location. The user must choose GPS, the entire city, a zip code, or a suburb of the city, and is then taken to the chosen link. However, the site is programmed so that Googlebot doesn't get this meaningless 'choose further' page; instead the crawler sees the page of results for the entire city (as you would expect from the URL). So the program defaults to the entire-city results for Googlebot, but first gives the user the ability to choose GPS.
2. A user comes to mysite.com/gps-loc/city/results. Seeing the literal string 'gps-loc' in the URL, the site fetches the user's GPS location and returns results based on it. If Googlebot comes to that URL, there is no way the program can return the same results, because it can't obtain the same latitude and longitude as that user.
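To make scenario 1 concrete, here is a minimal sketch of the routing logic described above, assuming a simple user-agent check; the function name, the token list, and the page labels are all hypothetical placeholders, not the site's actual code:

```python
# Hypothetical sketch: crawlers skip the location chooser and get the
# full-city results the URL implies; users see the chooser first.
BOT_UA_TOKENS = ("googlebot", "bingbot")

def page_for(path, user_agent, has_gps_fix=False):
    ua = user_agent.lower()
    if any(token in ua for token in BOT_UA_TOKENS):
        # Crawler: serve the entire-city results page directly.
        return "city-results"
    if has_gps_fix:
        # User already granted GPS: serve location-based results.
        return "gps-results"
    # Default for users on first visit: the city/GPS/zip/suburb chooser.
    return "location-chooser"
```

Note that a plain user-agent match like this can be spoofed; if the distinction ever matters, Google recommends verifying Googlebot via reverse DNS rather than trusting the header alone.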
So, what do you think? Are these scenarios a concern for getting penalized by Google?
Thanks, Ted
-
Thanks Cyrus. Very good points!
-
Thanks Sheena. Good point on the second scenario: those URLs are generated via user POST, so in theory Google should never see or index them. But since they can be shared, Google ends up finding them, so I do need to make sure Google doesn't index them if possible.
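One common way to keep those shared GPS URLs out of the index is an X-Robots-Tag response header (or an equivalent robots meta tag) on anything under the gps-loc path. A rough sketch, with a hypothetical helper name and path check standing in for however the real site builds its responses:

```python
def headers_for(path):
    """Hypothetical sketch: pages reached via a shared /gps-loc/ URL get an
    X-Robots-Tag header so Google drops them from the index, while users
    who follow the shared link still see the page normally."""
    headers = {"Content-Type": "text/html; charset=utf-8"}
    if "/gps-loc/" in path:
        headers["X-Robots-Tag"] = "noindex, nofollow"
    return headers
```

Googlebot has to be able to fetch the URL to see the header, so these URLs should not also be blocked in robots.txt, or the noindex will never be read.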
-
This is not the definition of cloaking and I wouldn't worry too much about any penalty.
That said, any time you serve Googlebot a different experience than users, it's a situation you want to be very careful with, and in most cases avoid. Often this is solved by serving the different experiences via JavaScript. Even though Google is pretty darn good at parsing JavaScript, they will often interpret the default version of a page as if JavaScript were turned off.
Regardless, I'd keep an eye on search results, Google Webmaster Tools, and cached versions of your site, and make ample use of "Fetch and Render" in GWT to ensure Google interprets your site the way you think it should.
-
I don't have experience with any site using this type of selector, but theoretically you shouldn't encounter any problems, as you're showing different content with the intent of improving the experience, not to deceive. If Google handles this like an IP-based redirect, you should be fine.
In scenario 2, however, I'm wondering if you even want Google to index these URLs - since it sounds like these URLs will be dynamically generated & might end up being duplicates of other pages on the site (similar to internal search pages). Something to watch out for!