Can Google read content that is hidden under a "Read More" area?
-
For example, when a person first lands on a given page, they see a collapsed paragraph; if they want more information, they click "Read More" and it expands to reveal the full paragraph. Does Google crawl the full paragraph or just the shortened version?
In the same vein, what if you have a text box that contains three different tabs? For example, you're selling a product whose description box has Overview, Instructions & Ingredients tabs, all housed under the same URL. Does Google crawl all three tabs?
Thanks for your insight!
-
Yes, for the most part. Google wants to deliver the best results for visitors based on their search query, so content hidden from the initial view can affect UX, especially if it's poorly implemented (not intuitive). As you know, original and compelling copy is best; unfortunately, in many situations, such as a large ecommerce site, producing it is resource intensive. It's best to avoid thin content. That said, hidden content does get indexed and ranked: grab a snippet of it, search for it in Google, and look at the results. So yes, it's possible that Google will rank these pages even when the duplicate content sits in a hidden view.
I would advise you to tell your client to remove any hidden content and rewrite the product descriptions. Depending on resources, they may or may not want to do this; if they don't, at least you've made the recommendation. Good luck!
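To make the distinction concrete, here is a minimal sketch of a "Read More" pattern (hypothetical markup, not from the thread) in which the full paragraph ships in the initial HTML and is only collapsed visually, so it is part of what Googlebot downloads when it fetches the page.

```html
<!-- The entire paragraph is in the source, even while visually collapsed. -->
<div class="product-copy">
  <p id="full-text" class="collapsed">
    Long product description ... the whole paragraph lives here in the HTML.
  </p>
  <button id="read-more" type="button">Read More</button>
</div>

<style>
  /* Truncate visually; the text itself stays in the DOM. */
  .collapsed { max-height: 3em; overflow: hidden; }
</style>

<script>
  // Clicking only removes a class; nothing new is downloaded.
  document.getElementById('read-more').addEventListener('click', function () {
    document.getElementById('full-text').classList.remove('collapsed');
    this.style.display = 'none';
  });
</script>
```

If the click instead fetched the paragraph from the server, that text would not be in the HTML Googlebot initially crawls, which changes the answer.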
-
OK, that makes sense. And can that be applied to a text box with tabs?
Follow-up to that: the situation is that I have a client that doesn't have a lot of "original" content on their e-commerce pages. It sounds like Google will count that hidden content as "original" content but won't necessarily use it to build relevancy for any keywords hidden within. Is that correct?
-
I agree with Kevin in the answer above: the content may be crawled (depending on how you have hidden the paragraph in the HTML), but Google may not give full weight to content that only becomes visible after clicking the link.
We have a client with an FAQ section in a similar situation (https://www.fairsplit.com/faqs/): the website gains authority for the question titles in the FAQ section, but not for the answer content that appears only after clicking a question.
I hope this helps, let me know if you have further questions.
Regards,
Vijay
-
The Googlebot will crawl this information. However, Google may elect not to index it or discount this content in its rankings.
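The same logic covers the tabbed product box from the original question. A minimal sketch (hypothetical markup, assuming all three panels are rendered in the page source): the Overview, Instructions, and Ingredients text is all in the crawled HTML, and the tabs merely switch which panel is displayed.

```html
<!-- All three tab panels live in the initial HTML under one URL. -->
<div class="product-tabs">
  <nav>
    <button data-tab="overview">Overview</button>
    <button data-tab="instructions">Instructions</button>
    <button data-tab="ingredients">Ingredients</button>
  </nav>
  <section id="overview">Overview copy ...</section>
  <section id="instructions" hidden>Instructions copy ...</section>
  <section id="ingredients" hidden>Ingredients copy ...</section>
</div>

<script>
  // Clicking a tab toggles the "hidden" attribute; every panel's text is
  // already part of the document Googlebot fetched.
  document.querySelectorAll('.product-tabs nav button').forEach(function (btn) {
    btn.addEventListener('click', function () {
      document.querySelectorAll('.product-tabs section').forEach(function (s) {
        s.hidden = (s.id !== btn.dataset.tab);
      });
    });
  });
</script>
```

Whether that hidden panel text is given full ranking weight is the separate question the answers above address.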
Related Questions
-
Community Discussion - What's the ROI of "pruning" content from your ecommerce site?
Happy Friday, everyone! 🙂 This week's Community Discussion comes from Monday's blog post by Everett Sizemore. Everett suggests that pruning underperforming product pages and other content from your ecommerce site can provide the greatest ROI a larger site can get in 2016. Do you agree or disagree? While the "pruning" tactic here is suggested for ecommerce and for larger sites, do you think you could implement a similar protocol on your own site with positive results? What would you change? What would you test?
Intermediate & Advanced SEO | MattRoney2
-
Dilemma about "images" folder in robots.txt
Hi, hope you're doing well. I am sure you guys are aware that Google has updated their webmaster technical guidelines, saying that users should allow access to their CSS and JavaScript files where possible. It used to be that Google would render web pages as text only; now it claims it can read the CSS and JavaScript, and by their own terms, not allowing access to CSS files can result in sub-optimal rankings: "Disallowing crawling of Javascript or CSS files in your site's robots.txt directly harms how well our algorithms render and index your content and can result in suboptimal rankings." (http://googlewebmastercentral.blogspot.com/2014/10/updating-our-technical-webmaster.html) We have allowed access to our CSS files, and Googlebot now sees our webpages more like a normal user would (tested it in GWT).
Anyhow, this is my dilemma, and I'm sure a lot of other users, like other ecommerce companies/websites, face the same situation: we have a lot of images. Our CSS files used to be inside our images folder, so I have allowed access to that. Here's the robots.txt: http://www.modbargains.com/robots.txt
Right now we are blocking the images folder, as it is very large, very heavy, and some of the images are very high-res. We block it because we feel Googlebot might spend almost all of its time trying to crawl that "images" folder and not have enough time to crawl other important pages, not to mention a very heavy load on Google's servers and ours. We do have good, high-quality, original pictures, and we feel we are losing potential rankings by blocking images. I was thinking of allowing ONLY the Google image bot access to it, but I still feel Google might spend a lot of time doing that. I was wondering whether Google decides something like "let me spend 10 minutes on the Google image bot and 20 minutes on the Google mobile bot", or whether it has separate "time spending" allocations for each of its bot types. I want to unblock the images folder, for now only for the Google image bot, but I fear it might drastically hamper indexing of our important pages because, as I mentioned, we have tons and tons of images and Google would spend plenty of time just crawling that folder.
Any advice? Recommendations? Suggestions? Technical guidance? Plan of action? I'm pretty sure I answered my own question, but I need confirmation from an expert that I'm right in saying: allow only Google Image access to my images folder. Sincerely, Shaleen Shah
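A minimal robots.txt sketch of the approach this question lands on (hypothetical rules, not the site's actual file): keep the heavy images folder blocked for crawlers in general while letting Google's image crawler in. Because a crawler obeys only the most specific user-agent group that matches it, Googlebot-Image would follow its own group here and ignore the general disallow.

```
# Hypothetical sketch - adjust the path to the real folder structure.

# All other crawlers: keep the heavy /images/ folder off-limits.
User-agent: *
Disallow: /images/

# Google's image crawler: allow it in so the images can be indexed.
User-agent: Googlebot-Image
Allow: /images/
```

This only controls which crawler may fetch the folder; how Google divides crawling time between its bots is its own decision and isn't something robots.txt can dictate.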
Intermediate & Advanced SEO | Modbargains1
-
User generated content - manual warning from Google
Over the weekend our website received a large amount of spammy comments and user profiles on our forums. This has led to Google giving us a partial manual action until we clear things up. So far we have cleared up all the spam, banned the offending user accounts, and temporarily enabled admin approval for new sign-ups. We are also investigating upgrading the forum software to the latest version to make the forums less susceptible to this kind of attack. Could anyone let me know whether they think it is the right time for us to submit a reconsideration request to get the manual action removed? Will the temporary actions we have taken be enough to get the ban lifted, or should we wait until the forum software has been updated? I'd really appreciate any advice, especially if there is anyone here who has experienced this issue themselves 🙂
Intermediate & Advanced SEO | RG_SEO0
-
"No Index, No Follow" or No Index, Follow" for URLs with Thin Content?
Greetings MOZ community: If I have a site with about 200 thin content pages that I want Google to remove from their index, should I set them to "No Index, No Follow" or to "No Index, Follow"? My SEO firm has advised me to set them to "No Index, Follow", but in a recent MOZ help forum post someone suggested "No Index, No Follow". The MOZ poster said that telling Google the content should not be indexed but the links should be followed was inconsistent and could get me into trouble. This makes a lot of sense. What is proper form? As background, I think I have recently been hit with a Panda 4.0 penalty for thin content. I have several hundred URLs with less than 50 words and want them de-indexed. My site is a commercial real estate site and the listings apparently have too little content. Thanks, Alan
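For reference, a minimal sketch of the two tags being debated, placed in the <head> of each thin page; the first keeps the page out of the index while still letting its links be followed, the second blocks both.

```html
<!-- Option advised by the SEO firm: de-index the page but keep following its links. -->
<meta name="robots" content="noindex, follow">

<!-- Option from the forum post: de-index the page and don't follow its links either. -->
<meta name="robots" content="noindex, nofollow">
```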
Intermediate & Advanced SEO | Kingalan10
-
What is a "good" dwell time?
I know there isn't any official documentation from Google about the exact number of seconds a user should spend on a site, but does anyone have any case studies looking at what might be a good "dwell time" to shoot for? We're looking at integrating an exact time-on-site threshold into our Google Analytics metrics to count as a 'non-bounce'; so, for example, if a user spends 45 seconds on an article, we wouldn't count it as a bounce, since the reader likely read through all the content.
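One common way to implement that "count as a non-bounce after N seconds" idea is to fire an interaction event after a timer. A minimal sketch follows using the analytics.js ga() event call; the 45-second threshold and the event category/action names are placeholders, not anything prescribed by Google.

```html
<script>
  // Assumes analytics.js is already loaded and the ga() command queue exists.
  // After 45 seconds on the page, send an interaction event so Google Analytics
  // no longer counts this visit as a bounce.
  setTimeout(function () {
    ga('send', 'event', 'Engagement', 'time-on-page', '45 seconds');
  }, 45000);
</script>
```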
Intermediate & Advanced SEO | nicole.healthline0
-
Change of URLs: "little by little" VS "all at once"
Hi guys, We're planning to change the URL structure of our product pages (to make them more SEO friendly), and it's obviously something very sensitive because of the 301 redirects we have to put in place... I have a doubt about Mister Google: if we make the change slowly (area by area, to minimize the risk of problems from a bad 301 redirect), would we lose rankings in the search engine? I'm wondering if Google might consider our website not "coherent", since not all product pages would share the same URL structure for some time. Thanks for your kind opinion 😉
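Whichever pace is chosen, each old product URL needs a permanent (301) redirect to its new counterpart. A minimal sketch for an Apache server with mod_rewrite, using entirely hypothetical URL patterns; an area-by-area rollout would simply add one rule per area as its new URLs go live.

```
# Hypothetical example: old URL pattern -> new SEO-friendly structure.
# /shop/item-123.html  ->  /products/123/
RewriteEngine On
RewriteRule ^shop/item-(\d+)\.html$ /products/$1/ [R=301,L]

# Added later, when the "laptops" area migrates:
# /shop/laptops/item-456.html  ->  /laptops/456/
RewriteRule ^shop/laptops/item-(\d+)\.html$ /laptops/$1/ [R=301,L]
```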
Intermediate & Advanced SEO | Kuantokusta0
-
Does Google read texts when display=none?
Hi, On the category pages of our e-commerce site we have pagination (e.g. Toshiba laptops page 1, page 2, etc.), implemented with rel="next" and rel="prev". On the first page of each category we display a header with lots of text information. This header is hidden on the following pages using display: none. Since it is only a CSS display trick, I wondered whether Google might still read the header on those pages and consider it duplicated content. Thanks
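A minimal sketch of the setup described, with hypothetical URLs, as page 2 of a category might look: the header markup is still present in the HTML and merely hidden with CSS, so it is part of what Googlebot downloads; only markup actually omitted from pages 2+ would be invisible to the crawler.

```html
<!-- Hypothetical page 2 of a category -->
<head>
  <link rel="prev" href="https://www.example.com/toshiba-laptops/">
  <link rel="next" href="https://www.example.com/toshiba-laptops/page/3/">
</head>
<body>
  <!-- The text is in the source even though visitors never see it on page 2. -->
  <div class="category-header" style="display: none;">
    Long descriptive text about Toshiba laptops ...
  </div>
  <!-- product listing ... -->
</body>
```

One way to sidestep the duplicate-content worry entirely is to omit the header markup server-side on pages 2 and beyond rather than hiding it with CSS.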
Intermediate & Advanced SEO | BeytzNet0
-
Why is my site "STILL" violating the Google quality guidelines?
Hello, I had a site with two topics: Fashion & Technology. Due to the Panda update I decided to change some things, and one of those things was the separation of these two topics. So, on June 21, I redirected (301) all the Fashion pages to a new domain. The new domain performed well for the first three days, but the rankings dropped later. Now the site doesn't even rank for its own name. So I thought the website had been penalized for some reason, and I sent a reconsideration request to Google. In fact, five days later, Google confirmed that my site is "still violating the quality guidelines".
I don't understand. My original site was never penalized and the content is the same. How does it become penalized just a few days after being moved to the new domain? Is this penalization only a sandbox for the new domain? Or will it last just until the old URLs disappear from the index (due to the 301 redirect)? Maybe Google thinks my new site is duplicating my old site? Or is it just a temporary precaution with new domains after a redirection, in order to stop spammers? Maybe this is not a real penalization and I only need a little patience? Or do you think my site is really violating the quality guidelines? (The domain is http://www.newclothing.co/)
The original domain where the fashion section was installed before is http://www.myddnetwork.com/ (as you can see, it is now a tech blog without fashion sections). The 301 redirects are working well. One example of a redirected URL: http://www.myddnetwork.com/clothing-shoes-accessories/ (this is the homepage, but each page was redirected to its corresponding URL on the new domain). I appreciate any advice. Basically my fashion pages have dropped totally; both the new and old URLs are not ranking. 😞
Intermediate & Advanced SEO | omarinho0