Using a lot of "Read More" Hidden text
-
My site has a LOT of "read more" links, and when a user clicks one they see a lot of text. The "read more" link is dark blue, bold, and clear to the user. It is perfect for the user experience, since right below I have pictures and videos, which is what most users want.
Question: I expect few users will click "Read more" (although some users will appreciate the chance to read and learn more). Might search engines think I am hiding text, making this a risky approach, or will they simply discount the text as having zero value from an SEO perspective?
Or, equally important: if the text were NOT hidden behind a "Read more", would it actually carry more SEO value than if it were hidden, even though users will NOT read it anyway? If yes, the reason might be that when the text is not hidden, search engines cannot tell that users are not reading it, so it carries more weight from an SEO perspective than text on pages where it sits under a "Read more" that users rarely click.
-
Hi khi5
I analyzed your page. You are doing just fine. You are using CSS display: none, and you are not doing any cloaking.
You are doing the right thing.
1. You are not fooling Google.
2. You are not fooling the user.
3. You are giving the user a better user experience.
Don't worry; you are not applying any "black hat" technique, and you will not get penalized.
-
Thx, Anirban. I am not a programmer, so would you be able to tell me if this approach seems right: http://www.honoluluhi5.com/oahu/honolulu-condos/ - I don't know if it uses CSS display: none or something else.
I can't think of a better layout for that page, and hiding text the way I have done it is ideal for users. If I showed more of the text up front, surely the bounce rate would go up!
-
A technique used by many huge sites is to pre-load code, navigation, or content in the background so that it can be dynamically displayed as needed. The most common way of accomplishing this is the CSS display: none property.
Unfortunately, you can also use display: none to simply hide text, and this is where the perceived problem comes in. People worry that using display: none to hide content (and show it when the user asks for it), or for content that is really meant for screen readers, can lead them into trouble. The legitimate use of this technique is so prevalent that I would rarely expect search engines to penalize a site for using the display: none property. It's just very difficult to implement an algorithm that could truly ferret out whether a particular use of display: none is meant to deceive the search engines or not.
I usually use this tactic to make the page more user friendly. The user doesn't get bombarded by a large piece of content, and I am not fooling the user or Google. I am simply giving the user the option to read more if they want to.
"display: none"
What it does: the functionality stays the same - when the user clicks "read more" the text opens, and when the user clicks "less" it closes.
How it defeats the "cloaking" idea: when Google crawls your page, the full content is there in the HTML (even in a text-based browser with no JavaScript), and when the user views the page there is a "read more" link that reveals that same content when clicked. You are not showing two different things to Google and the user, so there shouldn't be a cloaking problem. It's tested.
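To make it concrete, here is a minimal sketch of the kind of markup I mean (the element names, class names, and text are purely illustrative, not taken from your actual page):

```html
<!-- Hypothetical "read more" block using display: none.
     The full text is present in the HTML that crawlers download;
     only its on-screen visibility is toggled by the link. -->
<div class="listing-description">
  <p>Short intro paragraph that is always visible.</p>

  <div id="more-text" style="display: none;">
    <p>Longer supporting text that appears only when the visitor asks for it.</p>
  </div>

  <a href="#" onclick="
       var more = document.getElementById('more-text');
       var isHidden = more.style.display === 'none';
       more.style.display = isHidden ? 'block' : 'none';
       this.textContent = isHidden ? 'Less' : 'Read more';
       return false;">Read more</a>
</div>
```

Because the hidden text is in the page source either way, Googlebot and a JavaScript-less text browser both see the same content a clicking user would, which is why this setup isn't cloaking.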
Hope this helps...
Also refer to: http://moz.com/community/q/would-using-display-none-to-hide-a-section-of-text-effect-seo-negatively
-
Wow -- thanks for the links! I learn something new every day.
I'll defer to others on your specific question since I haven't ever worked with sites that specifically do what you do. I hope someone will give you a good answer!
-
http://searchengineland.com/googles-matt-cutts-on-hidden-text-using-expandable-sections-youll-be-in-good-shape-167753 - this is another, more relevant Matt Cutts video, which again mentions it is OK to use those "read more" sections.
Again, my bigger concern is whether it is truly OK, or whether I am safer off showing all the text if possible.
-
Thx, Sam. Here is a video from Matt Cutts: https://www.youtube.com/watch?v=UpK1VGJN4XY - it appears Google is OK with hidden text that makes sense for the user.
For my site I have a lot of "read more" sections, like here:
http://www.honoluluhi5.com/oahu-condos/
http://www.honoluluhi5.com/oahu/honolulu-city-real-estate/
As you can see from those 2 links, I have created these pages with only the user in mind and nothing else. To play it safe, maybe I should just show all the text somehow, even though it compromises the user experience.
-
The answer to your question lies in another question: Do search engines see one thing and users see another? If the answer is "yes," then you are using "cloaking" -- which is a very bad black-hat SEO technique. It can get you penalized and possibly banned.
Users don't see the text unless they click "read more," but search engines see the text either way? That's cloaking. I'd stop doing this right away.