"Too many links" - PageRank question
-
This question seems to come up a lot.
I have a 70-page flat site. For ease of navigation, I want to link every page to one another.
A pure CSS dropdown menu with categories, each expanding to its subpages. Made it, implemented it, remade it smartphone-friendly. Hurray.
I thought this was an SEO principle - ensuring good site navigation and good internal linking. Not forcing your users to hit "back". Not forcing your users to jump through hoops.
But unless I've misread http://www.seomoz.org/blog/how-many-links-is-too-many, this is something Google indirectly penalises: a homepage with 70 navigation links (call it ~80 links once you count the rest) lets each sub-page inherit only about 1/80th of its PageRank.
So it's good site navigation vs. subpages that are invisible on Google.
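To make the worry concrete, here is a toy version of the even-split model the Moz post is based on (a sketch only: 0.85 is the textbook damping factor, and Google's real, iterative computation is not public):

```javascript
// Simplified PageRank split: a page passes damping * PR,
// divided evenly among its outlinks. Textbook damping is 0.85.
function rankPassedPerLink(pageRank, outlinkCount, damping = 0.85) {
  return (pageRank * damping) / outlinkCount;
}

// A homepage with PR 1.0 and ~80 outlinks passes each subpage
// only about 1% of its rank.
console.log(rankPassedPerLink(1.0, 80).toFixed(4)); // "0.0106"
```

Under this model it isn't a penalty at all, just dilution: each extra link shrinks every other link's share.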
-
I think pretty much any JavaScript menu would be obfuscated from Google, though that's a bit of a grey-hat approach. I've been reading some interesting articles about PageRank sculpting and whether it's still possible.
-
James,
I'm with Mat: I believe in user experience. There is a way around the too-many-links issue, but you need a developer who understands jQuery. Basically, that's how the problem gets resolved. We had an insane number of links; we now have fewer than 100.
Chad
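Chad doesn't show the jQuery trick, but the usual version is to build the dropdown client-side so the links are absent from the raw HTML. A hypothetical sketch (the function name and menu data structure are mine, not Chad's; note Google has rendered JavaScript for years, so this is a fragile grey-hat tactic):

```javascript
// Build the dropdown markup in the browser instead of the HTML source,
// so crawlers that don't execute JavaScript never see these links.
// (Google's renderer usually *does* execute JS, so don't rely on this.)
function buildMenuHtml(categories) {
  return '<ul class="menu">' + categories.map(function (cat) {
    return '<li>' + cat.name + '<ul>' +
      cat.pages.map(function (p) {
        return '<li><a href="' + p.url + '">' + p.title + '</a></li>';
      }).join('') +
      '</ul></li>';
  }).join('') + '</ul>';
}

// With jQuery, injected after load, e.g.:
// $(function () { $('#nav').html(buildMenuHtml(menuData)); });
```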
-
If that is what makes sense then do it.
Adding a second tier of structure would let you direct more rank to certain areas (tier-1 pages all get an equal share of rank, and putting more links in some categories than others forces more PageRank towards those). However, the overall effect is that less rank reaches the bottom pages.
Personally, I would put user experience first. If linking to all 70 in the menu makes sense, then do that. What is the point of ranking a site that people don't want to use because it's a pain to navigate? Then use cross-linking to add greater emphasis to the pages you want to reinforce.
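That trade-off can be checked with the same toy even-split model (damping 0.85; the 7-categories-of-10-pages layout is a hypothetical example, and real PageRank flow is more complicated):

```javascript
const DAMPING = 0.85;

// Flat structure: homepage (PR 1.0) links to all 70 pages directly.
const flatShare = (1.0 * DAMPING) / 70;

// Two-tier: homepage links to 7 category pages, each linking to 10 leaves.
const categoryShare = (1.0 * DAMPING) / 7;        // rank reaching each category
const leafShare = (categoryShare * DAMPING) / 10; // rank reaching each leaf

console.log(flatShare.toFixed(4)); // "0.0121"
console.log(leafShare.toFixed(4)); // "0.0103"
```

Each extra tier costs a damping step, so leaves end up with slightly less overall; what the tier buys you is the ability to give favoured categories fewer siblings, and therefore a larger share.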
Related Questions
-
How do I answer the question "why is xyz site ranking for abc keyword and not our website?"
Hi all, this is a layman's question, but I would like a concrete answer. How should I answer questions like "Why is our competitor ranking for keyword ABC but not us?" What metrics or data can I show that give a logical answer? Please help in this regard. Thanks!
Intermediate & Advanced SEO | Avin1230 -
72KB CSS code directly in the page header (not in external CSS file). Done for faster "above the fold" loading. Any problem with this?
To optimize for Google's PageSpeed, our developer has moved the 72KB of CSS directly into the page header (not into an external CSS file). This reduced the above-the-fold loading time. But could this affect indexing of the page or have other negative side effects on rankings? I made a quick test and the Google cache seems to have our full pages cached, but might it somehow hurt our rankings, or cause Google to index fewer of our pages? (We already have some problems with Google ignoring about 30% of the pages in our sitemap.)
Intermediate & Advanced SEO | lcourse0 -
"noindex, follow" or "robots.txt" for thin content pages
Does anyone have any testing evidence of what is better to use for thin-content pages that are nevertheless important to keep on a website? I am referring to content shared across multiple websites (such as e-commerce, real estate, etc.). Imagine a website with 300 high-quality pages indexed and 5,000 thin product-type pages that would not generate relevant search traffic. The question is: does the interlinking value achieved by "noindex, follow" outweigh the negative of Google having to crawl all those "noindex" pages? With robots.txt, Google's crawling focuses on just the important pages that are indexed, which may give rankings a boost. Any experiments with insight into this would be great. I do get the story about "make the pages unique", "get customer reviews and comments", etc., but the above is the important question here.
Intermediate & Advanced SEO | khi50 -
Is it better "nofollow" or "follow" links to external social pages?
Hello, I have four outbound links from my site's home page taking users to join us on our social network pages (Twitter, FB, YT and Google+). If you look at my home page, you can find those 4 links as 4 large buttons in the right column: http://www.virtualsheetmusic.com/ Here is my question: do you think it is better to add the rel="nofollow" directive to those 4 links, or to allow Google to follow them? From a PageRank perspective, I am sure it would be better to apply nofollow, but I would like Google to understand that we have a presence on those 4 social channels and to clearly make a correlation between our official website and our official social channels (and so understand that our social channels are legitimate and related to us), and I am afraid the nofollow directive could prevent that. What's the best move in this case? What do you suggest? Maybe nofollow is irrelevant to whether Google correlates our website with our legitimate social channels, but I am not sure about that. Any suggestions are very welcome. Thank you in advance!
Intermediate & Advanced SEO | fablau9 -
Anyone managed to decrease the "not selected" graph in WMT?
Hi Mozzers. I am working with a very large e-commerce site that has a big issue with duplicate or near-duplicate content. The site actually received a message in WMT listing pages that Google deemed it should not be crawling. Many of these were the usual pagination / category-sorting-option URL issues, etc. We have since fixed the issue with a combination of site changes, robots.txt, parameter handling and URL removals; however, I was expecting the "not selected" graph in WMT to start dropping. The number of roboted pages has increased by around 1 million (which was expected) and indexed pages have actually increased despite removing hundreds of thousands of pages. I assume this is due to releasing some crawl bandwidth for more important pages like products. I guess my question is two-fold: 1. Is the "not selected" graph cumulative, as this would explain why it isn't dropping? 2. Has anyone managed to get this figure to significantly drop? Should I even care? I am relating this to Panda, by the way. Important to note that the changes were made around 3 weeks ago and I am aware not everything will have been re-crawled yet. Thanks,
Chris
Intermediate & Advanced SEO | Further [attached image: notselected.jpg] -
Are my "Terms & Conditions", "Privacy Policy" and "About Us" pages stealing link juice?
Should I make them nofollow? Or is this a bogus method?
Intermediate & Advanced SEO | SEObleu.com0 -
Crawling report says "duplicate titles" for wp-login.php
Hi there, how are you guys doing? I have a quick question. The last crawling report we received said we have three pages with "duplicate titles". Those three pages are: /wp-login.php, /wp-login.php?action=lostpassword, /wp-login.php?action=register. I'm a little confused because those pages don't even have a title. Do you think it's a big deal? Also, do you have any idea why the crawling report says those pages have duplicate titles? Apparently, wp-login.php is part of the WordPress core; it's a built-in page that handles login and registration, not something we can edit. Thanks a lot and have a nice day!
Intermediate & Advanced SEO | Ericc22 -
Outgoing affiliate links and link juice
I have some affiliate websites which have loads of outgoing affiliate links. I've discussed this with an SEO friend and talked about the effect of link juice flowing out to the affiliate sites. To minimize this I've put nofollow on the affiliate links, but my friend says that even with nofollow, Google still diminishes the amount of juice that goes to internal pages. For example, if a page has 10 links and 9 are nofollowed affiliate links, Google will only give 10% of the juice to the 1 internal page. Does anyone know if this is the case, and whether there are any good techniques to keep as much link juice on the site as possible without passing it to affiliate links? Appreciate any thoughts on this! Cheers
Intermediate & Advanced SEO | Ventura0