"Equity sculpting" with internal nofollow links
-
I’ve been trying a couple of new site auditor services this week, and both have flagged the fact that I have some nofollow links to internal pages.
I see this subject has popped up from time to time in this community. I also found a 2013 Matt Cutts video on the subject:
https://searchenginewatch.com/sew/news/2298312/matt-cutts-you-dont-have-to-nofollow-internal-links
At a couple of SEO conferences I’ve attended this year, I was advised that nofollow on internal links can be useful so as not to squander link equity on secondary (but necessary) pages. I suspect many websites have a lot of internal links in their footers and are sharing the love with pages that don’t really need a boost. These pages can still be indexed, just not given a helping hand to rank by strong pages. This “equity sculpting” (I made that up) seems to make sense to me, but am I missing something?
Examples of these secondary pages include login pages, site maps (human readable), policies – arguably even the general contact page.
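For anyone unfamiliar with the mechanics, the change being debated is just a `rel` attribute on each internal link; the paths here are hypothetical:

```html
<!-- followed internal link: passes equity as normal -->
<a href="/services">Services</a>

<!-- nofollow'd internal links: the "equity sculpting" idea -->
<a href="/login" rel="nofollow">Login</a>
<a href="/privacy-policy" rel="nofollow">Privacy Policy</a>
```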
Thoughts?
Regards,
Warren -
Useful reference links. Many thanks, Mike.
-
Here's a bit more on the subject.
Matt Cutts PageRank Sculpting 2009
TheSEMPost 2015 - Pagerank sculpting
The SEOBlog Pagerank Sculpting 2014
It just feels like this concept comes back up every other year or so. And as much as it does work, it also doesn't. Personally, I think it's a better use of time and effort to look at your site navigation and make sure it's user-friendly, intuitive, and natural in order to direct flow better, and also to work on link-building efforts to increase authority.
-
Thanks, Mike.
Just to be clear, I still want those non-primary internal pages (maybe not the human sitemap and login) to be indexed, so a robots.txt approach will not completely solve the problem. I just don't want to potentially squander link equity on secondary pages. Footers tend to carry quite a bulk of links, so there is a lot of dilution there. I had hoped that by halving my links, I'd be doubling the equity passed by the remaining ones.
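That footer-bulk point is easy to quantify. Here's a rough, stdlib-only sketch of the kind of check an auditor runs, counting followed vs. nofollow'd internal links on a page (the footer HTML and hostname below are made up for illustration):

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkAuditor(HTMLParser):
    """Collects internal links on a page, split into followed vs. nofollow'd."""
    def __init__(self, site_host):
        super().__init__()
        self.site_host = site_host
        self.followed = []
        self.nofollowed = []

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        attrs = dict(attrs)
        href = attrs.get("href")
        if not href:
            return
        # Relative links have no netloc, so treat them as internal.
        host = urlparse(href).netloc or self.site_host
        if host != self.site_host:
            return  # external link, not part of this audit
        rel = (attrs.get("rel") or "").lower().split()
        if "nofollow" in rel:
            self.nofollowed.append(href)
        else:
            self.followed.append(href)

# Hypothetical footer markup for demonstration.
page = """
<footer>
  <a href="/about">About</a>
  <a href="/login" rel="nofollow">Login</a>
  <a href="/sitemap" rel="nofollow">Sitemap</a>
  <a href="https://twitter.com/example">Twitter</a>
</footer>
"""
auditor = LinkAuditor("www.example.com")
auditor.feed(page)
print(len(auditor.followed), len(auditor.nofollowed))  # → 1 2
```

Running something like this over a few templates makes the dilution ratio concrete before deciding whether it's worth touching at all.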
The first reference was useful, but only mentions my sculpting goal in the very last sentence without elaborating. The thing I found most interesting was the first comment from Mark Traphagen:
So, if this is true, there's absolutely no equity saving to be had from nofollow'ing internal links to my non-primary pages. But... is it true?! Any experiment results out there?
Finally, with regard to old versions of policies being published, I can't see how that would cause any legal problems. It's the published version that matters and, while I can set cache-expiry directives, nobody can be held responsible for out-of-date information stored in a third-party cache (unless, of course, it was unlawful at the time of publishing).
-
Adding nofollow to a handful of links on your site will not magically sculpt link equity in a way that creates a noticeable improvement like that. If anything, you could just use robots.txt to stop those pages from being crawled. The bots don't necessarily need to index your login page, your human sitemap (if they already have their own), policies (which can change and cause legal issues if an older version is cached), and a few others.
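For completeness, the robots.txt route would look something like this (paths hypothetical); note that it blocks crawling rather than indexing, which is a different lever than nofollow:

```
# robots.txt — tells compliant bots not to crawl these paths.
# A blocked URL can still appear in the index if other pages link to it.
User-agent: *
Disallow: /login
Disallow: /sitemap.html
```

To keep a page crawlable but out of the index, a meta robots tag on the page itself (e.g. `<meta name="robots" content="noindex, follow">`) is the usual tool instead.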
And just a few months ago Gary Illyes stated that there's no good reason to nofollow internal links:
http://www.thesempost.com/google-dont-ever-nofollow-your-own-internal-links/