Should we always avoid drop-down menus?
-
In Google's SEO Starter Guide (page 12), they advise against using drop-down menus: http://static.googleusercontent.com/external_content/untrusted_dlcp/www.google.com/en/us/webmasters/docs/search-engine-optimization-starter-guide.pdf
But is this always true? What if you build the drop-down purely with HTML and CSS? And is it fine to use a bit of JavaScript to create the drop-down menu, or should it be HTML and CSS only?
-
Ensure you add a sitemap and you should be fine.
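By way of illustration, here is a minimal sitemap in the standard sitemaps.org XML format; the URLs are placeholders for whatever pages your drop-down menus link to, not anyone's actual site:

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- List every page the drop-down menus link to, so crawlers can reach them even if the menu itself proves hard to crawl -->
  <url>
    <loc>https://www.example.com/category/widgets/</loc>
  </url>
  <url>
    <loc>https://www.example.com/category/gadgets/</loc>
  </url>
</urlset>

Reference it from robots.txt with a Sitemap: line or submit it in Google Webmaster Tools, so the pages behind the menu get discovered either way.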
-
Agreed with Simon. Look at tons of huge online retailers like Zappos. Just needs to be done right and you're fine.
-
Hi Michelle
Drop-down menus are usually fine for SEO, so long as the navigational links within them are text links that search engine spiders can crawl and follow.
HTML and CSS are usually the preferred choice; JavaScript can sometimes be troublesome for bots, though it certainly has its useful place in web design. As long as the links within the menus are text links, you will be fine.
You could always run them through a spider simulator to make sure.
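To illustrate, a minimal sketch of a pure HTML and CSS drop-down in which every destination is an ordinary text link a spider can follow; the menu structure and URLs below are placeholders, not anyone's actual site:

<nav>
  <ul class="menu">
    <li>
      <a href="/products/">Products</a>
      <!-- Sub-menu items are plain text links: they are only hidden visually, so the hrefs stay crawlable -->
      <ul class="submenu">
        <li><a href="/products/widgets/">Widgets</a></li>
        <li><a href="/products/gadgets/">Gadgets</a></li>
      </ul>
    </li>
  </ul>
</nav>
<style>
  .menu .submenu { display: none; }            /* collapsed by default */
  .menu li:hover .submenu { display: block; }  /* revealed on hover, no JavaScript required */
</style>

If JavaScript is later added for animation or touch support, it is generally safest to keep the real URLs in the href attributes rather than generating the links in script, so the menu still degrades to plain crawlable links.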
Regards
Simon
Related Questions
-
Ranking drop for "Mobile" devices category in Google webmaster tools
Hi, Our rank dropped and we noticed it's a major drop in the "Mobile" devices category, which is contributing to the overall drop. What exactly causes mobile rankings to drop? We do not have any messages in Search Console. We have made a few redirects and removed footer links. How would these affect rankings? Thanks, Satish
Intermediate & Advanced SEO | vtmoz
-
A specific keyword has dropped from #1 in Google to nowhere at all...
Hi guys, I hope you can help. We have a large ecommerce website which has different domains for each language - GB, USA, DE, AU & CA. I've been working my way through the errors that have been flagged in Moz, and today I noticed something quite worrying: one of our strong keywords has dropped from 1st place to nowhere at all. However, the Canadian version is ranking in the UK desktop search, and our mobile site is appearing in the desktop search too.
The keyword is 'personalised macbook cover' and the page in question is https://www.mrnutcase.com/en-GB/personalised-macbook-cover/ I'm confused, as this page was ranking brilliantly a couple of weeks ago and now it's nowhere to be seen. We've added alternate and canonical tags to distinguish which site is mobile and which site is desktop (the usual annotation pattern is sketched below this question). We've also submitted a sitemap so that it takes into account all of the languages. There are no harmful links, and we've changed the content of the page across each language.
I've attached what appears in the SERPs in the UK (the mobile version ranking at the top and the Canadian version further down the page). Any help or advice would be greatly appreciated! Thanks, Danny
Intermediate & Advanced SEO | DannyNutcase
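For reference, the standard annotation pattern for a separate mobile URL plus language/country versions looks roughly like the sketch below. The m. subdomain and the en-CA URL are assumptions for illustration, not necessarily this site's real structure; every version needs the full reciprocal set of hreflang links, or Google can keep serving the wrong country's URL.

<!-- On the desktop page https://www.mrnutcase.com/en-GB/personalised-macbook-cover/ -->
<link rel="alternate" media="only screen and (max-width: 640px)"
      href="https://m.mrnutcase.com/en-GB/personalised-macbook-cover/" />  <!-- assumed mobile URL -->
<link rel="alternate" hreflang="en-gb" href="https://www.mrnutcase.com/en-GB/personalised-macbook-cover/" />
<link rel="alternate" hreflang="en-ca" href="https://www.mrnutcase.com/en-CA/personalised-macbook-cover/" />  <!-- assumed CA URL -->

<!-- On the mobile page: the canonical points back at the desktop URL -->
<link rel="canonical" href="https://www.mrnutcase.com/en-GB/personalised-macbook-cover/" />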
Traffic dropped suddenly
- In early January 2013, we had to switch servers after many years with the same one. We were highly ranked and getting about 8,500 unique visitors per month.
- We didn't notice the traffic falling because we were focused on a major site redesign and addition that we launched in April 2013. Visits continued to fall, this time also because the company that launched it didn't double-check their work and left some dead links, etc. Those were all fixed by approximately June 2013.
- In early January 2014 we switched servers again because we were afraid the server we had moved to was ranked poorly or had possibly hosted a spam site before.
Currently, nothing has changed. What was about 8,500 unique visitors per month 18 months ago is now about 1,000, and no leads are coming in at all.
Intermediate & Advanced SEO | HasitR
-
Avoiding Duplicate Content with Used Car Listings Database: Robots.txt vs Noindex vs Hash URLs (Help!)
Hi Guys, We have developed a plugin that allows us to display used vehicle listings from a centralized, third-party database. The functionality works similarly to autotrader.com or cargurus.com, and there are two primary components:
1. Vehicle Listings Pages: the page where the user applies various filters to narrow the vehicle listings and find the vehicle they want.
2. Vehicle Details Pages: the page where the user actually views the details about said vehicle. It is served up via Ajax, in a dialog box on the Vehicle Listings Pages. Example functionality: http://screencast.com/t/kArKm4tBo
The Vehicle Listings pages (#1) we do want indexed and to rank. These pages have additional content besides the vehicle listings themselves, and those results are randomized or sliced/diced in different and unique ways. They're also updated twice per day.
We do not want to index #2, the Vehicle Details pages, as these pages appear and disappear all of the time based on dealer inventory, and don't have much value in the SERPs. Additionally, other sites such as autotrader.com, Yahoo Autos, and others draw from this same database, so we're worried about duplicate content. For instance, entering a snippet of dealer-provided content for one specific listing that Google indexed yielded 8,200+ results: Example Google query.
We did not originally think that Google would even be able to index these pages, as they are served up via Ajax. However, it seems we were wrong, as Google has already begun indexing them. Not only is duplicate content an issue, but these pages are not meant for visitors to navigate to directly! If a user were to navigate to the URL directly from the SERPs, they would see a page that isn't styled right. Now we have to determine the right solution to keep these pages out of the index: robots.txt, noindex meta tags, or hash (#) internal links.
Robots.txt Advantages: Super easy to implement. Conserves crawl budget for large sites. Ensures the crawler doesn't get stuck; after all, if our website only has 500 pages that we really want indexed and ranked, and vehicle details pages constitute another 1,000,000,000 pages, it doesn't seem to make sense to make Googlebot crawl all of those pages.
Robots.txt Disadvantages: Doesn't prevent pages from being indexed, as we've seen, probably because there are internal links to these pages. We could nofollow those internal links, thereby minimizing indexation, but this would lead to 10-25 nofollowed internal links on each Vehicle Listings page (will Google think we're PageRank sculpting?).
Noindex Advantages: Does prevent vehicle details pages from being indexed. Allows ALL pages to be crawled (advantage?).
Noindex Disadvantages: Difficult to implement. The vehicle details pages are served via Ajax, so they have no <head> to carry a meta tag; the solution would have to involve the X-Robots-Tag HTTP header and Apache, sending a noindex header based on querystring variables, similar to this stackoverflow solution. This means the plugin functionality is no longer self-contained, and some hosts may not allow these types of Apache rewrites (as I understand it). It also forces (or rather allows) Googlebot to crawl hundreds of thousands of noindexed pages. I say "force" because of the crawl budget required; the crawler could get stuck/lost in so many pages, and may not like crawling a site with 1,000,000,000 pages, 99.9% of which are noindexed. Cannot be used in conjunction with robots.txt; after all, the crawler never reads the noindex meta tag if it is blocked by robots.txt.
Hash (#) URL Advantages: By using hash (#) hrefs for the links on Vehicle Listings pages that point to Vehicle Details pages (such as "Contact Seller" buttons), coupled with JavaScript, the crawler won't be able to follow/crawl these links. Best of both worlds: crawl budget isn't overtaxed by thousands of noindexed pages, and the internal links that got the robots.txt-disallowed pages indexed are gone. Accomplishes the same thing as "nofollowing" these links, but without looking like PageRank sculpting (?). Does not require complex Apache stuff.
Hash (#) URL Disadvantages: Is Google suspicious of sites with (some) internal links structured like this, since it can't crawl/follow them?
Initially, we implemented robots.txt, the "sledgehammer solution." We figured that we'd have a happier crawler this way, as it wouldn't have to crawl zillions of partially duplicate vehicle details pages, and we wanted it to be like these pages didn't even exist. However, Google seems to be indexing many of these pages anyway, probably based on internal links pointing to them. We could nofollow the links pointing to these pages, but we don't want it to look like we're PageRank sculpting or something like that.
If we implement noindex on these pages (and doing so is a difficult task itself), then we will be certain these pages aren't indexed. However, to do so we will have to remove the robots.txt disallow in order to let the crawler read the noindex tag on these pages. Intuitively, it doesn't make sense to me to make Googlebot crawl zillions of vehicle details pages, all of which are noindexed, and it could easily get stuck/lost/etc. It seems like a waste of resources, and in some shadowy way bad for SEO.
My developers are pushing for the third solution: using the hash URLs (the link pattern is sketched below this question). This works on all hosts and keeps all functionality in the plugin self-contained (unlike noindex), and conserves crawl budget while keeping vehicle details pages out of the index (unlike robots.txt). But I don't want Google to slap us 6-12 months from now because it doesn't like links like these. Any thoughts or advice you guys have would be hugely appreciated, as I've been going in circles, circles, circles on this for a couple of days now. Also, I can provide a test site URL if you'd like to see the functionality in action.
Intermediate & Advanced SEO | browndoginteractive
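A minimal sketch of the hash-link pattern weighed in the question above, assuming a hypothetical openVehicleDialog() function that loads the Ajax dialog; the markup, IDs and function name are illustrative, not the plugin's actual code:

<!-- Listing row on a Vehicle Listings page: no crawlable URL to the details page -->
<div class="vehicle" data-vehicle-id="12345">
  <span class="title">2012 Honda Accord EX-L</span>
  <!-- href="#" gives the crawler nothing to follow; JavaScript opens the Ajax dialog instead -->
  <a href="#" class="contact-seller" data-vehicle-id="12345">Contact Seller</a>
</div>
<script>
  document.querySelectorAll('.contact-seller').forEach(function (link) {
    link.addEventListener('click', function (event) {
      event.preventDefault(); // stop the browser from jumping to the top of the page
      openVehicleDialog(this.getAttribute('data-vehicle-id')); // hypothetical Ajax dialog loader
    });
  });
</script>

Because the details URLs can still be discovered from other sources (the shared third-party feed, other sites drawing on it), some people pair this with a noindex response on the details URLs themselves rather than relying on hidden links alone.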
SERP position and visitors are dropping every day. Any suggestions?
Hello, My site was getting 1,500-2,500 visitors and the SERP rank was very good, position 2nd to 4th. But now the rank is dropping day by day, and so are the visitors. Here is my present SERP: http://goo.gl/vHB6o and my site URL is secretlovemessages.com. Can anyone suggest what I am doing wrong and what I should do to get a stable SERP position and steady visitors? Thanks to all
Intermediate & Advanced SEO | purplerimon
-
Traffic drop off and page isn't indexed
In the last couple of weeks my impressions and clicks have dropped off to about half of what they used to be. I am wondering if Google is punishing me for something... I also added two new pages to my site in the first week of June and they still aren't indexed. In the past it seemed like new pages would be indexed in a couple of days. Is there any way to tell if Google is unhappy with my site? WMT shows 3 server errors, 3 access denied, and 122 not found errors. Could those not-found pages be killing me? Thanks for any advice, Greg www.AntiqueBanknotes.com
Intermediate & Advanced SEO | Banknotes
-
Drop in Traffic on Friday April 20th.
Just curious if anyone noticed a drop in traffic last Friday. I got hammered with about a 20% drop overall. Didn't know if there was an update or what. Thanks in advance!
Intermediate & Advanced SEO | astahl11
-
Should subdomains be avoided for brand new websites?
When creating a brand new website, will setting it up as a subdomain provide ranking benefits? I understand that if it's an existing domain, it's better to use a subfolder because a subdomain is treated as a different domain. But is there any reason not to start a website with the keyword in the subdomain? For example: keyword.domain.com. The SERPs are dominated by websites which contain some variation of the head term, but the disadvantage of doing something similar is that your website ends up looking very similar to them. Thanks!
Intermediate & Advanced SEO | JonDavies54