Does content in an accordion rank as well as content in a text box?
-
Does content rank better in a full-view text layout than in a clickable accordion?
I read somewhere that because users need to click to expand an accordion, the content may not rank as well, since it could be considered hidden on the page. Is this true?
Accordion example (see the Features section): https://www.workday.com/en-us/applications/student.html
-
Google does not treat content concealed behind tabs, accordions, or any other element that uses JavaScript to reveal it in the same way as content that is visible by default. However, it is still indexed, so pages may rank for search phrases related to content contained within the hidden sections.
Why does Google devalue hidden content?
Google's focus is on making the user experience within its search results as good as possible. If the algorithm gave full weight to content hidden behind JavaScript, that experience could be compromised.
For example, say a user searches for a term that a page matches only within a hidden section. They click through from the search results, can't immediately see the information they're looking for because it's collapsed, give up, and return to the results or head to another website.
In Google's assessment that is not a high-quality user experience, so the content within the hidden sections is down-weighted.
In Summary
- Content hidden within tabs, accordions, or other elements that rely on JavaScript to reveal it is likely to be treated differently by Google and assigned far less weight (a brief markup sketch follows below)
- Websites should therefore take a considered approach and use this method only to hide content that is of secondary importance to the primary topic of the page, or that covers related topics
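To make that distinction concrete, here is a minimal TypeScript sketch of two common accordion patterns; the element references and the fetch endpoint are hypothetical, not from the question's example site. In pattern A the panel text ships in the initial HTML and is merely collapsed, so a crawler that renders the page can still read it. In pattern B the text is fetched only when a visitor opens the panel, so it may never be seen by a crawler at all.

```typescript
// Sketch of two accordion patterns. Element handles and the
// /api/feature-details endpoint are hypothetical, for illustration only.

// Pattern A: the panel text is already in the server-rendered HTML and is
// merely collapsed. It stays in the DOM, so a rendering crawler can read it.
function setupCssAccordion(toggle: HTMLButtonElement, panel: HTMLElement): void {
  panel.hidden = true; // content remains in the DOM, just not displayed
  toggle.addEventListener("click", () => {
    panel.hidden = !panel.hidden;
    toggle.setAttribute("aria-expanded", String(!panel.hidden));
  });
}

// Pattern B: the panel starts empty and the text is fetched only when the
// visitor opens it. Content loaded this way is not in the initial HTML or the
// initially rendered DOM, so a crawler may never see it at all.
function setupLazyAccordion(toggle: HTMLButtonElement, panel: HTMLElement): void {
  panel.hidden = true;
  toggle.addEventListener("click", async () => {
    if (!panel.dataset.loaded) {
      const response = await fetch("/api/feature-details"); // hypothetical endpoint
      panel.textContent = await response.text();
      panel.dataset.loaded = "true";
    }
    panel.hidden = !panel.hidden;
  });
}
```

If the copy matters for ranking, pattern A is the safer of the two: keep the text in the initial HTML and only toggle its visibility.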
-
Hi there,
Absolutely not. In fact, I believe accordion content can outperform the same content presented as open text, although not for technical reasons.
Accordions are easier to fit into a page and can answer multiple user questions at once without throwing a wall of text at your visitors as they browse. Google reads accordions just the same as it reads open text; the difference comes from user interaction and satisfaction metrics.
Think about it like this:
You are browsing for the price of a product. You also want to know the shipping details and whether the product is safe for your 4-year-old.
Your search returns two companies in your area that sell the product.
The first website throws 3,000 words at you in solid blocks, requiring you to scroll for what feels like hours with no clear indication of where the answers to your questions are.
The second website can be scrolled in about two seconds and uses an accordion with clear headings and direct answers to your questions, with no need to wade through other content. Now we're cooking with gas.
In addition, accordion content lends itself to direct-answer formats, which in turn lend themselves to being showcased on SERPs. So not only can rankings improve, but so can traffic (there are plenty of studies showing that top-10 rankings drive traffic, but few people realize that metadata and snippets can improve your odds of capturing first-page traffic more than position alone).
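If you want accordion Q&A content to be eligible for those direct-answer treatments, one common approach is FAQPage structured data. The TypeScript sketch below assumes each accordion panel maps to one question-and-answer pair; the question and answer strings are placeholders, and whether Google actually shows a rich result is always at its discretion.

```typescript
// Sketch: emitting FAQPage structured data for accordion-style Q&A content.
// The questions and answers below are placeholders; swap in the real copy
// that appears inside the accordion panels.
interface FaqItem {
  question: string;
  answer: string;
}

function buildFaqJsonLd(items: FaqItem[]): string {
  return JSON.stringify({
    "@context": "https://schema.org",
    "@type": "FAQPage",
    mainEntity: items.map((item) => ({
      "@type": "Question",
      name: item.question,
      acceptedAnswer: { "@type": "Answer", text: item.answer },
    })),
  });
}

// Inject the JSON-LD into the page head so search engines can pick it up.
const script = document.createElement("script");
script.type = "application/ld+json";
script.textContent = buildFaqJsonLd([
  { question: "How much does shipping cost?", answer: "Standard shipping is free on orders over $50." },
  { question: "Is the product safe for young children?", answer: "Yes, it is certified for ages 3 and up." },
]);
document.head.appendChild(script);
```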
Over time, this website will generate more and more authority for this product and relevant search queries, overtaking the other.
To answer your question directly: Google treats both forms of content equally, but (all else being equal) better user metrics will bring greater link-building potential, greater readership, more shares, and so on to the site with the accordion setup.
Look forward to what others have to say on this,
Rob
Technical SEO | | ben10000