Personalized Content Vs. Cloaking
-
Hi Moz Community,
I have a question about content personalization: can we serve personalized content without being penalized for serving different content to robots vs. users? If content starts in the same initial state for all users, including crawlers, is it safe to assume there will be no SEO impact, since personalization won't happen for anyone until there is some interaction?
Thanks,
-
It sounds like you're on the right track. If users and bots start off with the same content, that's a good start.
From there, the question is "how much content is being customized, and how frequently?" For example, if you're swapping out 5 different headlines for 40% of users, and 60% of users see the original, that's not a big deal, particularly if the rest of the page is the same.
But if you're swapping out 80% of page copy (e.g., removing a bunch of excess copy that is shown for SEO purposes), and 60-90% of users are seeing that "light" version of the page, you run the risk of two things:
- First, the chance that the page wouldn't pass a manual review if one were performed.
- Second, the chance that Google may render a copy of the page as a user (not announcing itself as a crawler), see a different version of the page multiple times, and then effectively devalue the missing content or, worse, flag the page in its systems as cloaked content.
We could get lost in the details of whether or how they're doing this, but from a technology standpoint it's pretty simple for them to render content from non-official IPs and user-agents and do an 'honesty check' for situations where content shows up multiple ways. This is already how they compare the page on desktop vs. mobile to see which sections of the page render and which are changed.
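If you want to run that kind of check on your own pages, it's easy to approximate. Here's a rough sketch - just an illustration: the URL and the 0.9 threshold are placeholders, it needs Node 18+ for the built-in fetch, and it only catches user-agent-based differences, not IP-based ones:

```typescript
// Self-audit sketch: fetch the same URL as "Googlebot" and as a regular
// browser, then compare what comes back. Requires Node 18+ for the
// built-in fetch; the URL and the 0.9 threshold are placeholders.
const GOOGLEBOT_UA =
  "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)";
const BROWSER_UA =
  "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36";

async function fetchHtml(url: string, userAgent: string): Promise<string> {
  const res = await fetch(url, { headers: { "User-Agent": userAgent } });
  return res.text();
}

async function honestyCheck(url: string): Promise<void> {
  const [asBot, asUser] = await Promise.all([
    fetchHtml(url, GOOGLEBOT_UA),
    fetchHtml(url, BROWSER_UA),
  ]);
  // Crude signal: a large size difference suggests UA-based content swaps.
  const ratio =
    Math.min(asBot.length, asUser.length) /
    Math.max(asBot.length, asUser.length);
  console.log(`bot: ${asBot.length} bytes, user: ${asUser.length} bytes`);
  if (ratio < 0.9) {
    console.warn("Significant difference between bot and user versions.");
  }
}

honestyCheck("https://www.example.com/some-page");
```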
I think you are also right to rely on site interaction before personalizing, but since there are multiple ways to do that, you should know that it's possible for Google to simulate some of those interactions. So there's a chance that at some point they will render your content in a personalized state, particularly if personalization is triggered by visiting a URL or clicking a simple toggle switch or button.
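To make that concrete, here's a minimal sketch of interaction-gated personalization: every visitor and crawler gets the same initial markup, and the swap happens only on an explicit click. The element IDs and the localStorage key are placeholders, not anything from your site:

```typescript
// Browser-side sketch: every visitor (and crawler) gets the same initial
// HTML; the swap happens only after an explicit click. The element IDs
// and the localStorage key are placeholders.
const toggle = document.querySelector<HTMLButtonElement>("#compact-view");
const details = document.querySelector<HTMLElement>("#full-details");

toggle?.addEventListener("click", () => {
  if (!details) return;
  details.hidden = !details.hidden; // hide, don't delete, the full copy
  localStorage.setItem("prefersCompact", String(details.hidden));
});

// Re-applying the stored preference on later visits is where the caveat
// above kicks in: a rendering crawler that simulates the click (or gets
// the flag set some other way) could see the personalized state too.
if (localStorage.getItem("prefersCompact") === "true" && details) {
  details.hidden = true;
}
```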
Related Questions
-
Duplicate Content and Subdirectories
Hi there, and thank you in advance for your help! I'm seeking guidance on how to structure a resources directory (white papers, webinars, etc.) while avoiding duplicate content penalties. If you go to /resources on our site, there is a filter function. If you filter for webinars, the URL becomes /resources/?type=webinar. We didn't want that dynamic URL to be the primary URL for webinars, so we created a new page with the URL /resources/webinar that lists all of our webinars and includes a featured webinar up top. However, the same webinar titles now appear on both the /resources page and the /resources/webinar page. Will that cause duplicate content issues? P.S. Not sure if it matters, but we also changed the URLs for the individual resource pages to include the resource type. For example, one of our webinar URLs is /resources/webinar/forecasting-your-revenue. Thank you!
Technical SEO | SAIM_Marketing
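For illustration (this isn't from the original thread), one common mitigation for the filtered URL is a rel=canonical pointing at the static page it duplicates. A sketch, with paths mirroring the ones in the question and the non-webinar types assumed:

```typescript
// Sketch: map the filtered, parameterized URL to the static page it
// duplicates and emit that as rel=canonical. Paths mirror the question;
// the non-webinar types are assumed.
const CANONICAL_BY_TYPE: Record<string, string> = {
  webinar: "/resources/webinar",
  whitepaper: "/resources/whitepaper", // assumed; only "webinar" is given
};

function canonicalFor(requestUrl: string): string {
  const url = new URL(requestUrl, "https://www.example.com");
  if (url.pathname === "/resources/") {
    const type = url.searchParams.get("type");
    if (type && CANONICAL_BY_TYPE[type]) return CANONICAL_BY_TYPE[type];
  }
  return url.pathname; // unfiltered pages are their own canonical
}

// Emit in the <head> of the rendered page:
console.log(
  `<link rel="canonical" href="https://www.example.com${canonicalFor(
    "/resources/?type=webinar",
  )}">`,
); // -> .../resources/webinar
```
-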
Inconsistency between content and structured data markup
Hi everyone, what does Google think about inconsistency between content and structured data markup? Is this considered a kind of cheating? Will it hurt my SEO?
Technical SEO | intern202012
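For illustration, the general guideline is that structured data should describe content that's actually visible on the page, and that's checkable. A rough sketch (the "headline" field is just an example; extend for other properties):

```typescript
// Rough consistency check: pull JSON-LD blocks out of a page's HTML and
// confirm the marked-up headline actually appears in the visible copy.
// The "headline" field is just an example; extend for other properties.
function extractJsonLd(html: string): any[] {
  const blocks: any[] = [];
  const re =
    /<script[^>]*type="application\/ld\+json"[^>]*>([\s\S]*?)<\/script>/gi;
  let match: RegExpExecArray | null;
  while ((match = re.exec(html)) !== null) {
    try {
      blocks.push(JSON.parse(match[1]));
    } catch {
      console.warn("Unparseable JSON-LD block found."); // itself a red flag
    }
  }
  return blocks;
}

function headlineMatchesBody(html: string): boolean {
  const visibleText = html
    .replace(/<script[\s\S]*?<\/script>/gi, "")
    .replace(/<[^>]+>/g, " ");
  return extractJsonLd(html).every(
    (block) => !block.headline || visibleText.includes(block.headline),
  );
}
```
-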
What to do with old content after 301 redirect
I'm going through all our blog and FAQ pages to see which ones are performing well and which ones are competing with one another - basically doing an SEO content cleanup. Is there any SEO benefit to keeping a page published vs. trashing it after you apply a 301 redirect to a better-performing page?
Technical SEO | LindsayE
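For illustration, the mechanics of a 301 answer part of this on their own: once the redirect is in place, the old URL answers with the redirect instead of its old content, so there is nothing left to "keep published" at that address. A sketch, assuming an Express app with placeholder paths:

```typescript
// Sketch, assuming an Express app with placeholder paths. Once the 301
// is wired up, the old URL answers with the redirect instead of its old
// content, so there is nothing left to "keep published" at that address.
import express from "express";

const app = express();

const REDIRECTS: Record<string, string> = {
  "/blog/old-underperforming-post": "/blog/better-performing-post",
};

app.use((req, res, next) => {
  const target = REDIRECTS[req.path];
  if (target) {
    res.redirect(301, target); // permanent: consolidates signals on target
    return;
  }
  next();
});

app.listen(3000);
```
-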
Root directory vs. subdirectories
Hello. How much more important does Google consider pages in the root directory relative to pages in a subdirectory? Is it best to keep the most important pages of a site in the root directory? Thanks!
Technical SEO | nyc-seo
-
Does Google know what footer content is?
We plan to do away with fixed footer content and make, for the most part, the content in the traditional footer area unique, just like the 'main' part of the content. This raises the question: does Google know what is footer content as opposed to main on-page content?
Technical SEO | NeilD
-
#hashtag Anchor text within content
Hi, I have a question about anchor text within my site's content. It 'jumps' to content displayed further down the page via a side navigation at the top. These links don't take you away to any other page; instead they take you further down the page to the relevant content. My question is this: I've noticed that the anchor name - #jumpnavlink - is placed at the end of the page's URL, like so: www.mywebsite.com/example-page.php#jumpnavlink Is this creating a problem for duplicate content? Is it creating a new URL for viewers to use? Is it OK to have lots of these running throughout my site's content pages? Many thanks for any light that is shed on this one! Cheers, Alex
Technical SEO | SeoSheikh
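For illustration, a quick way to see that a fragment is not a new URL: the #... part never leaves the browser, so servers and search engines treat the page as one URL either way. A sketch using the URL from the question (with a scheme added so the URL constructor accepts it):

```typescript
// The fragment never leaves the browser: servers and search engines see
// one URL either way, so the jump links don't create duplicate pages.
const link = new URL("https://www.mywebsite.com/example-page.php#jumpnavlink");
console.log(link.pathname); // "/example-page.php"
console.log(link.hash);     // "#jumpnavlink" (handled client-side only)
console.log(link.origin + link.pathname); // the one URL search engines index
```
-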
Location Based Content / Googlebot
Our website has local content specialized to specific cities and states. The URL structure of this content is as follows: www.root.com/seattle www.root.com/washington When a user comes to a page, we auto-detect their IP and send them directly to the relevant location-based page - much the way that Yelp does. Unfortunately, what appears to be occurring is that Google comes in to our site from one of its data centers, such as San Jose, and is being routed to the San Jose page. When a user does a search for relevant keywords, in the SERPs they are being sent to the location pages that match where the bots appear to be coming in from. If we turn off the auto-geo, we think that Google might crawl our site better, but users would then be shown less relevant content on landing. What's the win/win situation here? Also, we appear to have some odd location/destination pages ranking high in the SERPs - in other words, locations that don't appear to correspond to one of Google's data centers. No idea why this might be happening. Suggestions?
Technical SEO | Allstar
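For illustration, one common win/win pattern here is to suggest rather than redirect: every request - bot or human - gets the URL it asked for, and geo detection only drives a dismissible banner. A sketch with a placeholder geo lookup:

```typescript
// Sketch of the "suggest, don't redirect" pattern: every request - bot
// or human - gets the URL it asked for, and geo detection only drives a
// dismissible banner. The geo lookup is a placeholder.
interface GeoResult {
  city: string;
  pagePath: string;
}

function lookupGeo(_ip: string): GeoResult {
  return { city: "Seattle", pagePath: "/seattle" }; // stand-in for a real service
}

function renderPage(requestedPath: string, ip: string): string {
  const geo = lookupGeo(ip);
  const banner =
    geo.pagePath !== requestedPath
      ? `<aside>Looking for ${geo.city}? <a href="${geo.pagePath}">See the local page</a></aside>`
      : "";
  // No redirect: Googlebot crawling from San Jose still receives
  // www.root.com/seattle when it requests that URL.
  return banner + `<main>Content for ${requestedPath}</main>`;
}

console.log(renderPage("/washington", "203.0.113.7"));
```
-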
How to tell if PDF content is being indexed?
I've searched extensively for this but could not find a definitive answer. We recently updated our website, and it contains links to about 30 PDF data sheets. I want to determine if the text from these PDFs is being indexed by search engines. When I do this search http://bit.ly/rRYJPe (Google - site:www.gamma-sci.com and filetype:pdf) I can see that the PDF URLs are getting indexed, but does that mean that their content is getting indexed? I have read in other posts/places that if you can copy text from a PDF and paste it, that means Google can index the content. When I try this with PDFs from our site I cannot copy text, but I was told that these PDFs were all created from Word docs, so they should be indexable, correct? Since WordPress has you upload PDFs like they are images, could this be causing the problem? Would it make sense to take the time and extract all of the PDF content to HTML? Thanks for any assistance; this has been driving me crazy.
Technical SEO | zazo
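For illustration, the "can the text be extracted?" test can be automated rather than done by copy-paste. This sketch assumes the third-party pdf-parse npm package (treat the exact API as an assumption and check its docs): readable output means the PDF has a real text layer, while empty output suggests the pages are scanned images, which would explain why text can't be copied or indexed.

```typescript
// Sketch of the "can the text be extracted?" test. Assumes the
// third-party pdf-parse npm package - treat the exact API as an
// assumption and check its docs. Readable output means the PDF has a
// real text layer; empty output suggests the pages are images.
import { readFileSync } from "node:fs";
import pdfParse from "pdf-parse";

async function checkPdf(path: string): Promise<void> {
  const data = await pdfParse(readFileSync(path));
  const text = data.text.trim();
  console.log(text ? text.slice(0, 500) : "No extractable text found.");
}

checkPdf("./datasheet.pdf"); // placeholder path
```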