How valuable is content "hidden" behind a JavaScript dropdown really?
-
I've come across a method implemented by some SEO agencies to fill up pages with somewhat relevant text and hide it behind a JavaScript dropdown. Does Google fall for such cheap tricks?
You can see this method used on these pages, for example (just scroll down to the bottom). It's all in German, but you get the idea:
http://www.insider-boersenbrief.de/
http://www.deko-und-kerzenshop.de/
What is your experience with this way of adding content to a site? Do you think it is valuable, or will it get penalised?
-
Hey guys -
Good question here. You are right, JFKORN, that the scenario I described in my post was one where content that should be accessible to Google was hidden behind JavaScript. Of course, Google is now indexing JavaScript and can parse it quite well, so I'm not sure that still holds true, but to be safe I still recommend not serving important content only through JavaScript.
It seems to me, though, that you are asking about the opposite. What they are doing here seems legitimate to me. In my mind, it is no different from using a collapsible DIV to put tabs onto a page, like on this page: http://www.rei.com/product/812097/black-diamond-posiwire-quickpack-quickdraw-set-package-of-6. I would say it's fine to do this. But be careful with the content, because you do not want to get into "stuffing" the pages with keywords, which can hurt your rankings even without an official penalty. I've seen this act more as an assumed algorithmic penalty that then went away when the text was removed.
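For reference, the collapsible pattern looks roughly like this (a hypothetical sketch, not REI's actual markup; the element ID is made up). The key point is that the text is present in the HTML source and merely styled as hidden until toggled:

```html
<!-- Content is in the markup (crawlable); only its visibility is toggled. -->
<a href="#" onclick="document.getElementById('specs').style.display = 'block'; return false;">
  Show full specs
</a>
<div id="specs" style="display: none;">
  <p>The full product copy sits here in the HTML, visible to crawlers.</p>
</div>
```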
So be careful, but I don't think you'd be doing anything greyhat here.
-
Thank you for the reply. I checked the link you posted; good information there. The only thing I was thinking about: the scenario John described wasn't necessarily content hidden behind an accessible dropdown. I'm still wondering whether this makes any difference to Google. Hiding content from users completely, versus giving them the choice to display it by clicking the dropdown button, seems different to me. One could also do this using CSS, just like with a CSS dropdown navigation; there wouldn't even have to be any JS involved. It all seems pretty grey-hat to me, though.
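A CSS-only version of the same idea, as described above, would be a sketch like this (class names are invented for illustration) — the content is again present in the markup and hidden purely by a stylesheet rule:

```html
<style>
  .dropdown .panel { display: none; }
  .dropdown:hover .panel { display: block; }
</style>
<div class="dropdown">
  <span>More details</span>
  <div class="panel">
    <p>No JavaScript involved: this text is hidden and revealed by CSS alone.</p>
  </div>
</div>
```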
-
UNANIMOUS: don't do it. We had several sites we were working on, from an acquisition, that had content hidden this way. We did some extensive research last month and got consistent feedback that it will be picked up by Google.
This guy's name is John Doherty. He is an active contributor to SEOmoz, and I have read some great SEO articles from him. In this one he gives an example of an SEO audit and what to make sure you look for:
http://www.johnfdoherty.com/seo-facepalms-dont-hide-content-behind-javascript/
He tells you in no uncertain terms not to do it. We got the same feedback from several other folks in SEO at the agency level.
Good luck.
Related Questions
-
Quick Fix to "Duplicate page without canonical tag"?
When we pull up Google Search Console, in the Index Coverage section, under the Excluded category, there is a sub-category called 'Duplicate page without canonical tag'. The majority of the 665 pages in that section are from a test environment. If we were to include in the robots.txt file a wildcard covering every URL that starts with the particular root path ("www.domain.com/host/"), could we eliminate the majority of these errors? That solution is not one of the 5 or 6 recommended solutions that the Google Search Console Help text suggests, but it seems like a simple, effective solution. Are we missing something?
Technical SEO | CREW-MARKETING
-
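If the test-environment URLs all live under one path, a prefix rule in robots.txt would stop them being crawled — a sketch, assuming the /host/ path from the question. One caveat: robots.txt blocks crawling, not indexing, so pages that are already indexed may need a noindex tag (and a recrawl) before the path is blocked.

```
User-agent: *
Disallow: /host/
```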
Google Webmaster Tools is saying "Sitemap contains urls which are blocked by robots.txt" after HTTPS move...
Hi Everyone, I really don't see anything wrong with our robots.txt file after our HTTPS move that just happened, but Google says all URLs are blocked. The only change I know we need to make is changing the sitemap URL to https. Anything you all see wrong with this robots.txt file?

# This file is to prevent the crawling and indexing of certain parts of your
# site by web crawlers and spiders run by sites like Yahoo! and Google. By
# telling these "robots" where not to go on your site, you save bandwidth
# and server resources. This file will be ignored unless it is at the root
# of your host:
#   Used:    http://example.com/robots.txt
#   Ignored: http://example.com/site/robots.txt
# For more information about the robots.txt standard, see:
#   http://www.robotstxt.org/wc/robots.html
# For syntax checking, see:
#   http://www.sxw.org.uk/computing/robots/check.html

# Website Sitemap
Sitemap: http://www.bestpricenutrition.com/sitemap.xml

# Crawlers Setup
User-agent: *

# Allowable Index
Allow: /*?p=
Allow: /index.php/blog/
Allow: /catalog/seo_sitemap/category/

# Directories
Disallow: /404/
Disallow: /app/
Disallow: /cgi-bin/
Disallow: /downloader/
Disallow: /includes/
Disallow: /lib/
Disallow: /magento/
Disallow: /pkginfo/
Disallow: /report/
Disallow: /stats/
Disallow: /var/

# Paths (clean URLs)
Disallow: /index.php/
Disallow: /catalog/product_compare/
Disallow: /catalog/category/view/
Disallow: /catalog/product/view/
Disallow: /catalogsearch/
Disallow: /checkout/
Disallow: /control/
Disallow: /contacts/
Disallow: /customer/
Disallow: /customize/
Disallow: /newsletter/
Disallow: /poll/
Disallow: /review/
Disallow: /sendfriend/
Disallow: /tag/
Disallow: /wishlist/
Disallow: /aitmanufacturers/index/view/
Disallow: /blog/tag/
Disallow: /advancedreviews/abuse/reportajax/
Disallow: /advancedreviews/ajaxproduct/
Disallow: /advancedreviews/proscons/checkbyproscons/
Disallow: /catalog/product/gallery/
Disallow: /productquestions/index/ajaxform/

# Files
Disallow: /cron.php
Disallow: /cron.sh
Disallow: /error_log
Disallow: /install.php
Disallow: /LICENSE.html
Disallow: /LICENSE.txt
Disallow: /LICENSE_AFL.txt
Disallow: /STATUS.txt

# Paths (no clean URLs)
Disallow: /*.php$
Disallow: /*?SID=
disallow: /*?cat=
disallow: /*?price=
disallow: /*?flavor=
disallow: /*?dir=
disallow: /*?mode=
disallow: /*?list=
disallow: /*?limit=5
disallow: /*?limit=10
disallow: /*?limit=15
disallow: /*?limit=20
disallow: /*?limit=250

Technical SEO | vetofunk
-
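As a sanity check on a file like the one above, Python's standard urllib.robotparser will report whether a given URL is blocked for a given user agent. The rules below are a trimmed, hypothetical subset of the file; note that this parser matches rules in file order, first match wins:

```python
from urllib import robotparser

# A trimmed subset of the rules above; in this parser, earlier lines win.
rules = """\
User-agent: *
Allow: /index.php/blog/
Disallow: /index.php/
Disallow: /checkout/
""".splitlines()

rp = robotparser.RobotFileParser()
rp.parse(rules)

# The blog path hits the Allow rule before the broader /index.php/ Disallow.
print(rp.can_fetch("Googlebot", "http://www.example.com/index.php/blog/some-post"))
print(rp.can_fetch("Googlebot", "http://www.example.com/checkout/"))
```

If the whole file comes back blocked, that usually points to a syntax problem (for example, comment text without leading `#` characters) rather than the rules themselves.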
Database driven content producing false duplicate content errors
How do I stop the Moz crawler from creating false duplicate content errors? I have yet to submit my website to the Google crawler because I am waiting to fix all my site optimization issues. Example: contactus.aspx?propid=200, contactus.aspx?propid=201... these are the same pages but with some old URL parameters stuck on them. How do I get Moz and Google not to consider these duplicates? I have looked at http://moz.com/learn/seo/duplicate-content with respect to rel="canonical" and I think I am just confused. Nick
Technical SEO | nickcargill
-
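For parameter duplicates like these, the usual fix is a rel="canonical" tag on every variant pointing at one clean URL. A minimal sketch of the server-side logic (the helper name and domain are made up for illustration):

```python
from urllib.parse import urlsplit, urlunsplit

def canonical_url(url):
    """Drop the query string so every parameter variant maps to one URL."""
    parts = urlsplit(url)
    return urlunsplit((parts.scheme, parts.netloc, parts.path, "", ""))

# Both parameter variants collapse to the same canonical href, which would
# go into <link rel="canonical" href="..."> in the page <head>.
print(canonical_url("http://www.example.com/contactus.aspx?propid=200"))
print(canonical_url("http://www.example.com/contactus.aspx?propid=201"))
```

With the canonical in place, crawlers fold the parameterized URLs into the clean one instead of flagging them as duplicates.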
Why does my mobile site have a "?mobiRedirect=1" string at the end of the URL?
Hello, When trying to access my site from a smart-phone, I'm getting a redirected to the mobile version (which is correct), however at the end of the URL there is a redirect string that shows every time. I'm not sure why its its showing or how it automatically gets appended to the end of the URL each time. How can I configure my mobile site to prevent the ?mobiRedirect=1" from showing? For example, if you search for "Columbus Regional Health" on Google with a smart-phone, the first result should be for www.crh.org. If you click that, you should get redirected to www.crh.org/mobile , however its displaying the URL as http://www.crh.org/mobile/default.aspx?mobiRedirect=1 Does anyone know how to fix this? Thank you,
Brian

Technical SEO | Liamis
-
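One common server-side fix is to 301 the flagged URL back to itself with the flag removed (or to point rel="canonical" at the clean URL). A sketch of the URL cleanup step, assuming the parameter name from the question:

```python
from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

def strip_param(url, name):
    """Return the URL with one query parameter removed, others preserved."""
    parts = urlsplit(url)
    query = [(k, v) for k, v in parse_qsl(parts.query) if k != name]
    return urlunsplit(parts._replace(query=urlencode(query)))

print(strip_param("http://www.crh.org/mobile/default.aspx?mobiRedirect=1",
                  "mobiRedirect"))
```

The mobile redirect handler itself would need to set a cookie or similar instead of the query flag, otherwise stripping it would just trigger the redirect loop again.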
Sitemaps and "noindex" pages
Experimenting a little to recover from Panda, we added a "noindex" tag to quite a few pages. Obviously we now need Google to re-crawl them ASAP and de-index them. Should we leave these pages in the sitemaps (with an updated "lastmod") for that, or just patiently wait? 🙂 What's the common/best way?
Technical SEO | LocalLocal
-
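A common approach is to leave the noindexed URLs in the sitemap with a fresh lastmod until they have been recrawled and dropped, then remove them. A hypothetical entry (the URL and date are made up):

```xml
<url>
  <loc>https://www.example.com/thin-page/</loc>
  <lastmod>2013-09-15</lastmod>
</url>
```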
Google inconsistent in display of meta content vs page content?
Our e-comm site includes more than 250 brand pages - lrg image, some fluffy text, maybe a video, links to categories for that brand, etc. In many cases, Google publishes our page title and description in their search results. However, in some cases, Google instead publishes our H1 and the aforementioned fluffy page content. We want our page content to read well, be descriptive of the brand and appropriate for the audience. We want our meta titles and descriptions brief and likely to attract CTR from qualified shoppers. I'm finding this difficult to manage when Google pulls from two different areas inconsistently. So my question... Is there a way to ensure Google only utilizes our title/desc for our listings?
Technical SEO | websurfer
-
Does Google "see through" PHP/ASP redirects?
A lot of the time I see companies employing a technique like this: <a target="_blank" href="/external/wcpages/referral.aspx?URL=http%253a%252f%252fwww.xxxx.ca&ReferralType=W&ProfileID=22&ListingID=96&CategoryID=219">xxxxx</a> (or similarly with PHP), in an attempt to log all the clicks that exit their site from certain locations. When Googlebot comes along and crawls this page, does it still understand that this page links to www.xxxx.ca?
Technical SEO | adriandg
-
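The destination is mechanically recoverable from the href itself: the URL parameter in that markup is percent-encoded twice (%253a is an encoded %3a), so after the query string is parsed, one more decoding pass yields the target. A sketch with a made-up domain:

```python
from urllib.parse import parse_qs, unquote, urlsplit

href = ("/external/wcpages/referral.aspx"
        "?URL=http%253a%252f%252fwww.example.ca&ReferralType=W&ProfileID=22")

# parse_qs undoes one level of encoding (%25 -> %); unquote undoes the second.
params = parse_qs(urlsplit(href).query)
destination = unquote(params["URL"][0])
print(destination)
```

Whether the crawler passes link equity through such a redirect is a separate question, but the link target is plainly visible in the URL.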
Duplicate Content Issue
Hi Everyone, I ran into a problem I didn't know I had (thanks to the SEOmoz tool) regarding duplicate content. My site is oxford ms homes.net, and when I built it, the web developer used PHP. After he was done I saw that the URLs looked like this: "/blake_listings.php?page=0", and I wanted them like this: "/blakes-listings". He changed them with no problem, and he did the same with all 300 or so pages on the site. I just found, using the crawl diagnostics tool, that I have around 3,000 duplicate content issues. Is there an easy fix for this, or does he have to go in and 301-redirect EVERY SINGLE URL? Thanks for any help you can give.
Technical SEO | blake-766240