CSS hidden DIVs - not collapsible content. Amber light?
-
I'm in the planning stage of a new ecommerce page. To reduce duplication issues, the page will be static, with about 20% of it made up of dynamic fields.
When a user selects a size or color, only the dynamic fields change; the rest of the content stays the same. That way I can keep a single static URL, avoid duplication issues, and focus on strengthening that one URL with rich schema, reviews, and backlinks.
We're going to cache a default version of the page so the dynamic fields don't appear empty to crawlers. My developer says they can cache the page with all of the variants of the dynamic fields in it and use hidden DIVs to hide them from the user.
That way the load speed stays high, and search engines might crawl those variant keywords too. Then I caught myself thinking, "Wait a minute, that's a good idea... but would a search engine think I'm hiding content and give me a penalty?" The hidden content is relevant to the page, and it only appears according to the drop-down selection, to make the user experience more friendly.
What do you think? Use hidden DIVs, or use JavaScript so bots can't crawl the hidden data at all?
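For concreteness, here is a rough sketch of the hidden-DIV approach under discussion, assuming one pre-rendered DIV per size/color variant and a select drop-down (the markup, IDs, and class names are made up purely for illustration):

// Assumed markup (hypothetical, not from an actual page):
//   <select id="variant-picker">
//     <option value="red-small">Red / Small</option>
//     <option value="blue-large">Blue / Large</option>
//   </select>
//   <div class="variant" data-variant="red-small">...red/small details...</div>
//   <div class="variant" data-variant="blue-large" style="display:none">...blue/large details...</div>
//
// All variant content is present in the cached HTML, so crawlers can read it;
// the script only changes which block is visible once the user picks an option.
document.getElementById('variant-picker').addEventListener('change', function () {
  var selected = this.value;
  var variants = document.querySelectorAll('.variant');
  for (var i = 0; i < variants.length; i++) {
    variants[i].style.display =
      variants[i].getAttribute('data-variant') === selected ? '' : 'none';
  }
});

The key point from a guidelines perspective is that each hidden block is exactly what the user sees when they choose that option from the drop-down, not extra keyword text that no selection ever reveals.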
-
If it's relevant, done with usability in mind, and not deceptive, then it should be fine.
Here's a related article from Search Engine Roundtable with a Matt Cutts video:
http://www.seroundtable.com/google-hiding-content-17136.html
Related Questions
-
Content change and variations in ranking
Hello, I have created a new webpage and asked Google in Webmaster Tools to crawl it. Within minutes it was ranked at a certain spot. I then made changes to try to improve the ranking, and right away I could see the position move up or down. I did the same thing for a page that has existed on my website for many years: I changed the content and asked Webmaster Tools to re-crawl it. Google picked up the new content within minutes, but the ranking doesn't seem to change. Maybe my content isn't good enough, but I doubt it. Could it be that on old pages it takes a couple of weeks to see ranking changes, whereas on new pages it is instantaneous? Has anyone experienced something similar? Thank you,
Intermediate & Advanced SEO | seoanalytics1 -
No content in the view source, why?
Hi, I have a website where you don't see the article body in the view source, but if you use the inspect element tool you can see the content. Do you know why? Thanks, Roy
Intermediate & Advanced SEO | kadut0 -
Duplicate content reported on WMT for 301 redirected content
We had to 301 redirect a large number of URLs. Now Google WMT is telling me that we have tons of duplicate page titles. When I looked into the specific URLs, I realized that Google is listing an old URL and its 301-redirected new URL as the sources of the duplicate content. I confirmed the redirect by using a server header tool to check that the 301 from the old URL to the new one is implemented correctly. Question: why is Google Webmaster Tools reporting duplicate content for these pages?
Intermediate & Advanced SEO | SEOAccount320 -
What are your thoughts on Content Automation?
Hi, I want to ask forum members' opinion on content automation. And before I raise the eyebrows of many of you with this question, I'd like to state that I am creating content and doing SEO for my own website, so I'm not looking to cut corners with spammy tactics that could hurt my site from an organic search perspective. The goal is to automate pages in the areas of headings, meta titles, meta descriptions, and perhaps a paragraph of content. More importantly, I'd like these pages to add value to the user's experience, so the question is: how do I go about automating the pages, and more specifically, how are meta titles, meta descriptions, etc. automated? I'd also like to hear from people who recommend steering clear of any form of content automation. I hope my question isn't too vague, and I look forward to hearing from other Mozzers. Regards, Russell in South Africa
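To illustrate the kind of automation being asked about, here is a minimal sketch of template-driven meta generation; the function, field names, and data below are entirely hypothetical:

// Minimal sketch of template-based titles and descriptions. Every name and
// value here is made up for illustration; real templates would pull from the
// site's own product or listing data.
function buildMeta(page) {
  return {
    title: page.productName + ' in ' + page.city + ' | ' + page.brand,
    description: 'Compare ' + page.productName + ' options in ' + page.city +
      '. ' + page.uniqueSummary
  };
}

// Example usage with made-up data:
console.log(buildMeta({
  productName: 'Solar Geysers',
  city: 'Cape Town',
  brand: 'ExampleBrand',
  uniqueSummary: 'Reviews, pricing, and installation details for 12 local suppliers.'
}));

The usual caveat is that the templated parts only add value when the per-page fields, like the unique summary above, genuinely differ from page to page.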
Intermediate & Advanced SEO | Shamima0 -
Uppercase in URLs = Dupe Content
Hi Mozzers, My developers recently changed a bunch of the pages I am working on to all lower case (something I know should ideally have been done in the first place). The URLs have sat for about a week as lower case without 301 redirecting the old upper-case URLs to these pages. In Google Webmaster Tools, I'm seeing Google flag them for duplicate meta tags, title tags, etc. See image: http://screencast.com/t/KloiZMKOYfa We're 301 redirecting the old URLs to the new ones ASAP, but is there anything else I should do? Any chance Google is going to noindex these pages because it sees them as dupes until I fix them? Sometimes I can see both pages in the SERPs if I use personalized results, and it scares me: http://screencast.com/t/4BL6iOhz4py3 Thanks!
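For illustration, here is a rough sketch of one way the lowercase redirect could be enforced. A Node/Express stack is assumed here purely as an example; on Apache or IIS the same idea is normally expressed as a server-level rewrite rule:

// Redirect any mixed-case path to its lowercase equivalent with a 301 so the
// lowercase URL consolidates the signals from the old upper-case one.
var express = require('express');
var app = express();

app.use(function (req, res, next) {
  var lower = req.path.toLowerCase();
  if (req.path !== lower) {
    // Preserve the query string, if any, when redirecting.
    return res.redirect(301, lower + req.url.slice(req.path.length));
  }
  next();
});

app.get('/products/:slug', function (req, res) {
  res.send('Product page');
});

app.listen(3000);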
Intermediate & Advanced SEO | Travis-W0
Duplicate content resulting from js redirect?
I recently created a cname (e.g. m.client-site .com) and added some js (supplied by the mobile site vendor) to the head, which is designed to detect whether or not the user agent is a mobile device. This is part of the js: var CurrentUrl = location.href var noredirect = document.location.search; if (noredirect.indexOf("no_redirect=true") < 0){ if ((navigator.userAgent.match(/(iPhone|iPod|BlackBerry|Android.*Mobile|webOS|Window Now... Webmaster Tools is indicating 2 URL versions for each page on the site - for example: 1.) /content-page.html 2.) /content-page.html?no_redirect=true and resulting in duplicate page titles and meta descriptions. I am not quite adept enough at either js or htaccess to really grasp what's going on here... so an explanation of why this is occurring and how to deal with it would be appreciated!
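For context, the truncated snippet above looks like a standard user-agent redirect. A rough reconstruction might read as follows - the end of the regular expression and the mobile hostname are assumptions, since the original is cut off:

// Illustrative reconstruction only; not the vendor's exact code.
var CurrentUrl = location.href;
var noredirect = document.location.search;

// Skip the redirect when the visitor has explicitly asked for the desktop page.
if (noredirect.indexOf('no_redirect=true') < 0) {
  if (navigator.userAgent.match(/(iPhone|iPod|BlackBerry|Android.*Mobile|webOS|Windows Phone)/i)) {
    window.location = 'http://m.client-site.com' + location.pathname;
  }
}

If the mobile pages link back to the desktop versions with ?no_redirect=true appended, those parameterised URLs can get crawled alongside the clean ones, which would explain the duplicate titles and descriptions being reported.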
Intermediate & Advanced SEO | SCW0 -
Best practices for handling https content?
Hi Mozzers - I'm having an issue with https content on my site that I need help with. Basically we have some pages that are meant to be secured - cart pages, auth pages, etc. - and then the rest of the site isn't secured. I need those pages to load correctly and independently of one another so that we are using both protocols correctly. The problem is that when a secure page is rendered, the resources behind it (scripts, etc.) won't load over the unsecured paths that are currently in our master page files. One solution would be to render the entire site in https only; however, that really scares me from an SEO standpoint, and I don't know if I want to put my eggs in that basket. Another solution is to structure the site so that secure pages are built differently from unsecured pages, but that requires a bit of restructuring and new SOPs to be put in place. I guess my question is really about best practices when using https. How can I avoid duplication issues? When do I need to use rel=canonical? What is the best way to handle this to avoid heavy maintenance moving forward?
Intermediate & Advanced SEO | CodyWheeler0 -
Which is more effective: JQuery + CSS for Tabbed Content or Create Unique Pages for each tab.
We are building a from-scratch directory site and trying to determine the best way to structure our pages. Each general listing page has four sections of specific information. Which is the better strategy for SEO: using tabs (e.g. jQuery + CSS) and putting all the content on one page (and will all of the content still be indexable with jQuery?), or creating unique pages for each section? jQuery: sitename.com/listing-name#section1 Unique pages: sitename.com/listing-name/section1 If I go with option one, I risk not being crawlable by Google if they can't read through the scripting. However, I feel like the individual pages won't rank if there's only a small amount of content in each section. Is it better to keep all the content on one page and focus on building links to that? Or better to build out the section pages and work on adding quality content to them, so that long term there is more specificity for long-tail search and a better search experience on Google? We are also set up to have "../listing-type/listing-name" but are considering removing the listing type and just having "../listing-name/". Do you think this is more advantageous for boosting rankings? I know that was like five questions. I've been doing a lot of research and these are the things that I'm still scratching my head about. Some general direction would be really great! Thank You!
Intermediate & Advanced SEO | knowyourbank0