Google's Stance on "Hidden" Content
-
Hi,
I'm aware that Google doesn't mind helpful content that can be hidden and unhidden by user interaction. I'm also aware that Google frowns upon hiding content from the user for SEO purposes. We're not considering anything like that.
The issue is, we will be displaying only a part of our content to the user at a time.
We'll load 3 results on each page initially. These first 3 results are static, meaning the same 3 results will display on each initial page load or refresh. However, we'll have a "Show Next 3" button which replaces the initial results with the next 3 results. All of this content will be preloaded in the source code, so Google will know about it.
I feel like Google shouldn't have an issue with this, since we're allowing user action to cycle through all of the results. But I'm curious: is it an issue that the user action does NOT allow them to see all of the results on the page at once?
I am leaning towards no, this doesn't matter, but would like some input if possible. Thanks a lot!
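For what it's worth, the cycling behavior described above reduces to simple slicing over a preloaded result set. Here is a minimal sketch in Python (names are hypothetical; in production this would be client-side script operating on markup already present in the source):

```python
# Minimal sketch of the "Show Next 3" cycling described above.
# All results are preloaded; only one "page" of 3 is shown at a time.

def visible_results(all_results, page, page_size=3):
    """Return the slice of results shown after `page` clicks of the button."""
    start = page * page_size
    return all_results[start:start + page_size]

results = [f"result-{i}" for i in range(1, 10)]  # 9 preloaded results
print(visible_results(results, 0))  # the static initial 3
print(visible_results(results, 1))  # after one "Show Next 3" click
```

The key point for the question is that every element of `results` is in the page source regardless of which slice is currently displayed.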
-
I don't think you're looking at a penalty situation, if that's what you are asking. Seems perfectly legitimate.
The more interesting question to me is how Google will "weigh" the hidden content in its algorithm. I suspect that anything hidden by JavaScript (or another method) will carry less weight than text in plain sight. You could try Google's new "Fetch and Render" tool in Webmaster Tools to see how Google views the page. Anything that doesn't display might not get as much consideration as plain text.
Of course, this is a lot of speculation. We don't really know for sure how Google treats text like this, but it's a pretty common situation.
-
Anyone else want to take a crack at this?
-
Hi Alrockn,
I'm not sure you understood the question. Thank you for reading.
-
This sounds similar to pagination issues, with the potential to create duplicate content in the eyes of Google, particularly if you're using a template. It's not a serious issue if this occurs for one or two clicks, but if viewers are going to do this five or more times after the initial landing page, it might be a problem with the meta tags.
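At the time of this thread, a common mitigation for paginated series was rel="prev"/rel="next" link tags (Google has since stopped using them as an indexing signal, but the structure still illustrates the idea). A minimal sketch with hypothetical URLs:

```python
# Hypothetical sketch: generating rel="prev"/rel="next" link tags for
# a paginated series, one historical mitigation for the duplicate
# content concern raised above.

def pagination_links(base_url, page, last_page):
    """Return the <link> tags for page `page` of a `last_page`-page series."""
    tags = []
    if page > 1:
        # Page 2 points back at the clean base URL, not ?page=1.
        prev_url = base_url if page == 2 else f"{base_url}?page={page - 1}"
        tags.append(f'<link rel="prev" href="{prev_url}">')
    if page < last_page:
        tags.append(f'<link rel="next" href="{base_url}?page={page + 1}">')
    return tags

print(pagination_links("https://example.com/results", 2, 4))
```

Each page in the series declares its neighbors, so the crawler can treat the set as one sequence rather than near-duplicate templates.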
Related Questions
-
How long will old pages stay in Google's cache? We have a new site that is two months old, but we are still seeing old pages even though we used 301 redirects.
Two months ago we launched a new website (same domain) and implemented 301 redirects for all of the pages. Two months later we are still seeing old pages in Google's cache. So how long should I tell the client it will take for them all to be removed from search?
Intermediate & Advanced SEO | Liamis
-
Google Search Console "Change of Address" - Failed Redirection Test
I have a client who has a lot of domain variations, which have all been set up in Google Search Console. I requested that the client use the COA feature in GSC for the domains that now redirect to other domains that they own (which are also set up in GSC). The problem is that we're not redirecting the homepages to the homepages of the destination domains, so GSC is giving us this error message: "fails redirection test: The old site redirects to www.domain.com/blog, which does not correspond to the new site you chose." Is our only way to use GSC COA for these domains to change the homepage redirect to go to the homepage of the destination domain? We don't really want that, since the domain we're redirecting is a "blog.domain1.com" subdomain and we want to redirect it to "domain2.com/blog". Any help appreciated! Thanks,
Dan
Intermediate & Advanced SEO | kernmedia
-
Google WMT/search console showing thousands of links in "Internal Links"
Hi, One of our blog posts shows thousands of internal links in Search Console, but it lists only 2 pages it is linked from. How can it have so many internal links? I don't see any. Thanks, Satish
Intermediate & Advanced SEO | vtmoz
-
Is there a tool to find out if a URL has been deemed "spam" by Google?
I am currently doing a link audit on one of my sites and I am coming across some links that appear to be spam. Is there a tool I can plug their URLs into to see if they have been deemed spam by Google?
Intermediate & Advanced SEO | Mozd
-
Rel="canonical" and rel="alternate" both necessary?
We are fighting duplicate content issues across multiple domains. We have a few Magento stores with different country codes. For example: domain.com and domain.ca, where domain.com is the "main" domain. We have set up rel="alternate" tags like: The question is, do we need to add custom rel="canonical" tags to domain.ca that point to domain.com? For example, for domain.ca/product.html to point to: Also, how far does rel="canonical" follow? For example, if we have:
domain.ca/sub/product.html canonical to domain.com/sub/product.html
then
domain.com/sub/product.html canonical to domain.com/product.html
Intermediate & Advanced SEO | AlliedComputer
-
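The chained-canonical question above ("how far does rel="canonical" follow?") can be sketched as following a mapping hop by hop. Search engines don't guarantee they will follow chains, so pointing each page directly at its final canonical is generally safer. A hypothetical sketch:

```python
# Hypothetical sketch: following a chain of canonical targets, as in
# the domain.ca -> domain.com/sub -> domain.com example above.

def resolve_canonical(url, canonical_map, max_hops=5):
    """Follow canonical targets until a URL has no further target (or a loop/limit is hit)."""
    seen = set()
    while url in canonical_map and url not in seen and len(seen) < max_hops:
        seen.add(url)
        url = canonical_map[url]
    return url

chain = {
    "domain.ca/sub/product.html": "domain.com/sub/product.html",
    "domain.com/sub/product.html": "domain.com/product.html",
}
print(resolve_canonical("domain.ca/sub/product.html", chain))
```

If a crawler only honors one hop, the .ca page consolidates to domain.com/sub/product.html rather than the final target, which is why collapsing the chain up front is the safer setup.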
Can I, in Google's good graces, check for Googlebot to turn on/off tracking parameters in URLs?
Basically, we use a number of parameters in our URLs for event tracking, and Google could be crawling an infinite number of these URLs. I'm already using the canonical tag to point at the non-tracking versions of those URLs... but that doesn't stop the crawling. I want to know if I can do conditional 301s, or just detect the user agent, as a way to know when NOT to append those parameters. I'm just trying to follow their guidelines about allowing bots to crawl without things like session IDs, but they don't tell you HOW to do this. Thanks!
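Rather than varying behavior by user agent (which edges toward cloaking), one option is to compute the clean, parameter-free URL and use it everywhere a crawler looks: canonical tags, internal links, sitemaps. A minimal sketch, assuming a made-up set of tracking parameter names:

```python
# Hypothetical sketch: stripping known tracking parameters from a URL
# so the clean version can be used in canonical tags and internal links.
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

TRACKING_PARAMS = {"sessionid", "utm_source", "utm_medium", "ref"}  # example set

def strip_tracking(url):
    """Remove tracking query parameters, keeping all other parts of the URL."""
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query)
            if k.lower() not in TRACKING_PARAMS]
    return urlunsplit((parts.scheme, parts.netloc, parts.path,
                       urlencode(kept), parts.fragment))

print(strip_tracking("https://example.com/page?id=7&sessionid=abc&utm_source=x"))
```

This keeps functional parameters (like `id`) intact while dropping the tracking noise, so the crawlable URL space stays finite.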
Intermediate & Advanced SEO | KenShafer
-
Is it possible to "undo" canonical tags as unique content is created?
We will soon be launching an education site that teaches people how to drive (not really the topic, but it will do). We plan on being content rich and have plans to expand into several "schools" of driving. Currently, content falls into a number of categories, for example rules of the road, shifting gears, safety, etc. We are going to group content into general categories that apply broadly, and then into "schools" where the content is meant to be consumed in a specific order. So, for example, some URLs in general categories may be:
drivingschool.com/safety
drivingschool.com/rules-of-the-road
drivingschool.com/shifting-gears
etc. Then, schools will be available for specific types of vehicles. For example:
drivingschool.com/cars
drivingschool.com/motorbikes
etc. We will provide lessons at the school level and in the general categories. This is where it gets tricky. If people are looking for general content, then we want them to find pages in the general categories (for example, drivingschool.com/rules-of-the-road/traffic-signs). However, we have very similar content within each of the schools (for example, drivingschool.com/motorbikes/rules-of-the-road/traffic-signs). As you can imagine, sometimes the content is quite unique between the various schools and the general category (such as in shifting), but often it is very similar or even nearly duplicate (as in the example above). The problem is that in the schools we want to say at the end of the lesson, "after this lesson, take the next lesson about speed limits for motorcycles," so there is a very logical click-path through the school. Unfortunately, this creates potential duplicate content issues. The best solution I've come up with is to include a canonical tag (pointing to the general version of the page) whenever the content is virtually identical. There will be cases, though, where we adjust the content "down the road" 🙂 to be more unique and more specific for the school. At that time we'd want to remove the canonical tag. So, two questions:
1. Does anyone have any better ideas of how to handle this duplicate content?
2. If we implement canonical tags now, and in 6 months update content to be more school-specific, will "undoing" the canonical tag (and even adding a self-referential tag) work for SEO?
I really hope someone has some insight into this! Many thanks (in advance).
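The "undo-able" canonical described above amounts to a conditional: point at the general page while the school page is near-duplicate, and switch to a self-referential canonical once the content is unique. A hypothetical sketch:

```python
# Hypothetical sketch of the "undo-able" canonical described above:
# emit a canonical pointing at the general version while the school
# page is still near-duplicate, and a self-referential canonical once
# its content has been made unique.

def canonical_tag(page_url, general_url, is_unique):
    """Return the canonical <link> tag for a school lesson page."""
    target = page_url if is_unique else general_url
    return f'<link rel="canonical" href="{target}">'

school = "https://drivingschool.com/motorbikes/rules-of-the-road/traffic-signs"
general = "https://drivingschool.com/rules-of-the-road/traffic-signs"
print(canonical_tag(school, general, is_unique=False))
```

Flipping `is_unique` to True when the rewrite ships is the whole "undo": the tag is re-evaluated on every render, so nothing else has to change.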
Intermediate & Advanced SEO | JessicaB
-
Can Google read the content of an iframe and use it for PageRank?
Beginner's question: when a website has its content inside iframes, will Google read it and consider it for PageRank?
Intermediate & Advanced SEO | Naghirniac