Ever Wise to Intentionally Use JavaScript for Global Navigation?
-
I may be going against the grain here, but I'm going to throw this out there and I'm interested in hearing your feedback...
We are a fairly large online retailer (50k+ SKUs) whose category and subcategory pages all show well over 100 links (the refinement links on the left alone can quickly add up to 50+). What's worse, when you hover over our global navigation you see a flyout menu of over 80 links, all of which the bots see as ordinary crawlable anchors in the source.
Now, I realize the general rule of thumb is not to exceed 100 links on a page (and if you do the math, you can see we blow past that well before the bots ever reach the content we actually want them to crawl in the first place).
So...
Is it wise to intentionally shield these global nav links from the bots by using JavaScript?
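To make the question concrete, here is a minimal sketch of the kind of setup being asked about, assuming the crawler does not execute JavaScript; the element ID, URLs, and labels are all hypothetical. The flyout's anchors are only created client-side after load, so they never appear in the raw HTML.

```javascript
// Rough sketch: the flyout's links are created client-side after load, so they
// never appear in the raw HTML a non-JS crawler fetches. Assumes an empty
// <ul id="global-nav-flyout"> exists in the markup; the link data here is made up.
const NAV_LINKS = [
  { href: '/category/widgets', label: 'Widgets' },
  { href: '/category/gadgets', label: 'Gadgets' },
  // ...80+ more
];

document.addEventListener('DOMContentLoaded', () => {
  const menu = document.getElementById('global-nav-flyout');
  NAV_LINKS.forEach(({ href, label }) => {
    const li = document.createElement('li');
    const a = document.createElement('a');
    a.href = href;
    a.textContent = label;
    li.appendChild(a);
    menu.appendChild(li);
  });
});
```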
-
I had this same conversation with someone yesterday about a very similar set-up.
In a 2009 blog post, Matt Cutts said that the main reason not to include over 100 links was for user experience. It used to be for technical reasons, but those no longer apply. Here is the post: http://www.mattcutts.com/blog/how-many-links-per-page/
Lots of links can mean lots of code, which slows things down, and they also divide the PageRank flowing from the page fairly heavily. However, in the age of mega-menus I don't think the number of links is, in itself, a problem.
Just for reference (and the answer for your situation may be different), our conversation ended with a decision to reduce the number slightly, restructuring to leak less PageRank to unimportant pages. Overall, though, we still have a LOT of links and are happy with that.
Related Questions
-
How to solve JavaScript paginated content for SEO
In our blog listings page, we limit the number of posts that can be seen on the page to 10. However, all of the posts are loaded in the HTML of the page and page links are added to the bottom. Example page: https://tulanehealthcare.com/about/newsroom/ When a user clicks the next page, it simply filters the content on the same page for the next group of postings and displays these to the user. Nothing in the HTML or URL changes; this is all done via JavaScript. So the question is: does Google consider this hidden content, because all listings are in the HTML but only a handful are shown at a time? Or is Googlebot smart enough to know that the content is being filtered by JavaScript pagination? If this is indeed a problem, we have 2 possible solutions: not building the HTML for the next pages until you click 'next', or adding parameters to the URL to show the content has changed (see the sketch below). Any other solutions that would be better for SEO?
Intermediate & Advanced SEO | MJTrevens1
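For what it's worth, here is a rough client-side sketch of the second idea from the question above (a real URL parameter for each page of results), assuming the post data is exposed to the page as a JavaScript array rather than pre-rendered in the HTML; the element ID and variable names are hypothetical.

```javascript
// Rough sketch: paginate blog listings with a real URL parameter (?page=N) and
// render only the current page's posts, instead of hiding pre-rendered HTML.
// Assumes the server exposes the post list as data (window.ALL_POSTS) and that
// the page contains an empty <ul id="blog-list">.
const POSTS_PER_PAGE = 10;
const posts = window.ALL_POSTS || [];

function currentPage() {
  const n = parseInt(new URLSearchParams(window.location.search).get('page'), 10);
  return Number.isInteger(n) && n > 0 ? n : 1;
}

function renderPage(page) {
  const list = document.getElementById('blog-list');
  list.innerHTML = '';
  posts.slice((page - 1) * POSTS_PER_PAGE, page * POSTS_PER_PAGE).forEach((post) => {
    const li = document.createElement('li');
    const a = document.createElement('a');
    a.href = post.url;
    a.textContent = post.title;
    li.appendChild(a);
    list.appendChild(li);
  });
}

// Wire this to the pagination links' click handlers.
function goToPage(page) {
  history.pushState({ page }, '', `?page=${page}`); // the URL now reflects what is shown
  renderPage(page);
}

window.addEventListener('popstate', () => renderPage(currentPage()));
renderPage(currentPage());
```

Plain `<a href="?page=2">` links that still work without JavaScript remain the safer baseline for crawlers.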
Ranking without use of keywords on page & without use of matching anchor text??
Howdy folks. So, here is a dilemma. One of our competitors is somehow ranking for the keyphrase "houston chronicle obituaries" without any usage of these keywords on the page and without any full or partial anchor-text match ("chronicle" is not used anywhere). The rest of the competitors' rankings make sense. Any ideas?
Intermediate & Advanced SEO | DmitriiK0
Has anyone used Wildshark?
Just stumbled across a company called Wildshark SEO (http://www.wildshark.co.uk). Has anyone used them? Any feedback? Please and thank you!
Intermediate & Advanced SEO | seoman100
Canonical & rel=NOINDEX used on the same page?
I have a real estate company, www.company.com, with approximately 400 agents. When an agent gets hired, we allow them to pick a URL which we then register and manage, for example www.AGENT1.com. We then 301 redirect that agent domain to a subdomain of our main site, e.g. Agent1.com 301s to agent1.company.com. We have each page on the agent subdomain canonicalized back to the corresponding page on www.company.com (for example, agent1.company.com canonicals to www.company.com).
What happened is that Google indexed many URLs on the subdomains, and it seemed like Google ignored the canonical in many cases. Although these URLs were being crawled and indexed by Google, I never noticed any of them rank in the results. My theory is that Google crawled the subdomain first, indexed the page, and then later crawled the main URL. At that point in time the two pages actually looked quite different from one another, so Google did not recognize/honor the canonical. For example:
Agent1.company.com/category1 gets crawled on day 1
Company.com/category1 gets crawled 5 days later
The content (recently listed properties for sale) on these category pages changes every day. If Google crawled both the subdomain and the main domain on the same day, the content would look identical; if the URLs are crawled on different days, the content will not match.
We had some major issues (duplicate content and site speed) on our www.company.com site that needed immediate attention. We knew we had an issue with the agent subdomains and decided to block crawling of the subdomains in robots.txt until we got the main site "fixed". Since blocking the subdomains we have seen a small decrease in organic traffic from Google to our main site, whereas with Bing our traffic has dropped almost 80%. After a couple of months we now have the main site mostly "fixed", and I want to figure out how to handle the subdomains in order to regain the lost organic traffic. My theory is that these subdomains have some link juice that is basically being wasted because of the robots.txt block.
Here is my question: if we put a robots noindex on all pages of the subdomains and leave the canonical (to the corresponding page of the company site) in place on each of those pages, will link juice flow to the canonical version? Basically, I want the link juice from the subdomains to pass to our main site, but I do not want those pages competing for a spot in the search results with our main site. Another thought I had was to place the noindex tag only on the category pages (the ones that change every day) and leave it off the property detail pages (which rarely change). Thank you in advance for any insight.
Intermediate & Advanced SEO | EasyStreet
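For illustration only, here is one hedged way to express the noindex-plus-canonical combination the question describes, using HTTP response headers from the subdomain instead of tags in each template; a minimal Node.js sketch where the domain, port, and page content are placeholders taken from the question. Whether equity actually flows through such pages is exactly the open question being asked, so treat this purely as the mechanics.

```javascript
// Rough sketch: serve agent subdomain pages with noindex plus a cross-domain
// canonical, expressed as HTTP headers rather than tags in the <head>.
// The domain, port, and response body are placeholders from the question.
const http = require('http');

http.createServer((req, res) => {
  const canonicalUrl = `https://www.company.com${req.url}`; // same path on the main site

  res.setHeader('X-Robots-Tag', 'noindex');                    // keep the subdomain page out of the index
  res.setHeader('Link', `<${canonicalUrl}>; rel="canonical"`); // point consolidation at company.com
  res.setHeader('Content-Type', 'text/html');

  // ...render the same listing content the main site shows for this path...
  res.end('<!doctype html><html><body>Agent listing page</body></html>');
}).listen(3000);
```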
Use noindex or canonical on product tags of an e-commerce site
I run an e-commerce site and we have many product tag pages. These product tags come up as "Duplicate Page Content" when Moz does its crawl. I was wondering if I should use noindex or canonical? The tags all lead to the same product when used, so I figure I would just noindex them, but I was wondering what's best for SEO.
Intermediate & Advanced SEO | EmmettButler1
Block in robots.txt instead of using canonical?
When I use a canonical tag for pages that are variations of the same page, it basically means that I don't want Google to index this page. But at the same time, spiders will go ahead and crawl the page. Isn't this a waste of my crawl budget? Wouldn't it be better to just disallow the page in robots.txt and let Google focus on crawling the pages that I do want indexed? In other words, why should I ever use rel=canonical as opposed to simply disallowing in robots.txt?
Intermediate & Advanced SEO | YairSpolter0
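Purely to illustrate the two mechanisms compared in the question above, here is a minimal Node.js sketch with placeholder URLs and paths: a robots.txt Disallow keeps spiders from fetching the variant pages at all, while the canonical approach lets them be crawled but points indexing at the preferred URL. One mechanical consequence worth keeping in mind is that a canonical on a robots.txt-blocked page can never be seen, because the page is never fetched.

```javascript
// Rough sketch contrasting the two options: block crawling of the variant URLs
// in robots.txt, or let them be crawled and declare a canonical. URLs are placeholders.
const http = require('http');

http.createServer((req, res) => {
  if (req.url === '/robots.txt') {
    // Option 1: spiders never fetch the variant pages at all, so no crawl budget is
    // spent, but nothing on those pages (canonical included) can be read by Google.
    res.setHeader('Content-Type', 'text/plain');
    res.end('User-agent: *\nDisallow: /products/*?sort=\n');
    return;
  }

  // Option 2: the variant page is crawlable, but it declares the preferred version.
  res.setHeader('Content-Type', 'text/html');
  res.end(`<!doctype html>
<html>
  <head><link rel="canonical" href="https://www.example.com/products/widgets"></head>
  <body>Sorted/filtered variant of the widgets listing</body>
</html>`);
}).listen(3000);
```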
Techniques to fix eCommerce faceted navigation
Hi everyone, I've read a lot about different techniques to fix duplicate content problems caused by eCommerce faceted navigation (e.g. redundant URL combinations of colors, sizes, etc.). From what I've seen, suggested methods include using AJAX or JavaScript to make the links functional for users only and prevent bots from crawling through them. I was wondering if this technique would work instead: if we detect that the user is a robot, instead of displaying a link we simply display its anchor text. So what for a human would be
COLOR
<li><a href="red">red</a></li>
<li><a href="blue">blue</a></li>
would for a robot be
COLOR
<li>red</li>
<li>blue</li>
Any reason I shouldn't do this? Thanks!
*** edit: Another reason to fix this is crawl budget, since robots can waste their time going through every possible combination of facets. This is also something I'm looking to fix.
Intermediate & Advanced SEO | anthematic
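Below is a minimal sketch of the server-side check described in the question above, assuming the robot is detected by a naive user-agent pattern; the facet values, URLs, and regex are all hypothetical, and it is meant only to show the mechanics being proposed (plain text for bots, working links for everyone else).

```javascript
// Rough sketch of the approach described above: render facet values as plain
// text for bots and as links for everyone else, keyed off the user agent.
const http = require('http');

const FACET_COLORS = ['red', 'blue'];
const BOT_PATTERN = /googlebot|bingbot|slurp/i; // naive user-agent check

function facetListHtml(isBot) {
  const items = FACET_COLORS.map((color) =>
    isBot
      ? `<li>${color}</li>`                               // bot: anchor text only
      : `<li><a href="?color=${color}">${color}</a></li>` // human: working link
  );
  return `<ul>${items.join('')}</ul>`;
}

http.createServer((req, res) => {
  const isBot = BOT_PATTERN.test(req.headers['user-agent'] || '');
  res.setHeader('Content-Type', 'text/html');
  res.end(`<!doctype html><html><body><h2>COLOR</h2>${facetListHtml(isBot)}</body></html>`);
}).listen(3000);
```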