Moz Q&A is closed.
After more than 13 years, and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we're not completely removing the content (many posts will still be viewable), we have locked both new posts and new replies.
Do backlinks need to be clicked to pass link juice?
-
Hi all:
Do backlinks need to be clicked to pass link juice? If so, can someone explain how much traffic is needed from a backlink for it to pass link juice?
Thanks for the help.
Audrey.
-
Backlinks do not have to be clicked in order to pass link juice. Recently my org (missionquest.org) joined Moz, and it helped our backlinks and improved our SEO.
-
I would be surprised.
Google knows a lot, but not everything. Unless the GA tracking code is installed, Google won't know about things such as user clicks.
If they passed link juice only for clicked backlinks, they would be ruling out too big a chunk of the web. That doesn't sound logical to me.
It also doesn't sound realistic to analyze every user click in the world when refreshing the Google index; they have a lot of metal, but not that much.
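For what it's worth, this matches the original PageRank model: equity is computed purely from the link graph, with no click data anywhere in the formula. A toy sketch of the power iteration (the graph and damping factor below are made up for illustration):

    # Toy PageRank: equity flows along the links in the graph itself;
    # no user-click data appears anywhere in the computation.
    # The three-page graph below is a made-up example.
    graph = {
        "a.com": ["b.com", "c.com"],  # a.com links out to b.com and c.com
        "b.com": ["c.com"],
        "c.com": ["a.com"],
    }

    damping = 0.85
    ranks = {page: 1.0 / len(graph) for page in graph}

    for _ in range(50):  # power iteration until roughly convergent
        new_ranks = {}
        for page in graph:
            # Sum the share of rank passed by every page linking here.
            incoming = sum(
                ranks[src] / len(outlinks)
                for src, outlinks in graph.items()
                if page in outlinks
            )
            new_ranks[page] = (1 - damping) / len(graph) + damping * incoming
        ranks = new_ranks

    print(ranks)  # c.com, with two incoming links, ends up highest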
-
So, are you saying that a link having traffic kind of disqualifies it as spammy? Or at least in the eyes of Google?
-
Absolutely not. Spam links still work fantastically well for ranking a site (temporarily). Those are links that never get seen or clicked; they pretty much just get crawled. Don't go the spam route, but also don't worry too much about people clicking links. I've gotten a ton of great links that have sent very, very little referral traffic, meaning that links on popular posts still don't guarantee any/many clicks.
-
I don't think so. I usually fetch and render, then submit, my pages any time I add one to my site or make a significant change, like adding content or changing images. Nothing unnatural about it.
-
Good idea. I wonder if it would seem "unnatural", however?
-
Submitting the page to Google for Indexing doesn't guarantee that the backlinks will be crawled, but it can be a good way to try to force them to be crawled.
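If you want to go beyond the Search Console UI, the request can also be made programmatically. Below is a minimal sketch using Google's Indexing API; the key-file path and URL are placeholders, and keep in mind Google officially supports this API only for a few content types, so treat it as illustration:

    # Sketch: notify Google that a URL was added/updated via the Indexing API.
    # Assumes a service account tied to the Search Console property;
    # "service-account.json" and the URL are placeholders.
    from google.oauth2 import service_account
    from google.auth.transport.requests import AuthorizedSession

    SCOPES = ["https://www.googleapis.com/auth/indexing"]
    ENDPOINT = "https://indexing.googleapis.com/v3/urlNotifications:publish"

    credentials = service_account.Credentials.from_service_account_file(
        "service-account.json", scopes=SCOPES
    )
    session = AuthorizedSession(credentials)

    response = session.post(
        ENDPOINT,
        json={"url": "https://example.com/new-page/", "type": "URL_UPDATED"},
    )
    print(response.status_code, response.json())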
-
In that case, wouldn't it be ideal to submit the page to Google for indexing right after it's published?
-
I think it's about page popularity and user engagement. Popularity in search results means a lot of spider activity on the page, and when a user clicks the link, a spider follows them to the new page. In the end, it's all about the spider discovering your page, and your link along with it (as I understand it).
-
In fact, it's not like that.
I will tell you a very important rule about backlinks that is really hard to find. The main point is that the link needs to be discovered by Google. And the page which contains the link must have popularity in Google search results, which means a lot of people entering the page through search results. This is what we call "the quality of the link".
Keep up with your link building journey.
-
The way that I understand it is that the click helps the link to be found faster than if it had not been clicked. It might have equity and pass link juice prior, but before Google finds it, it might not be counted as a link to your site. Does that make sense? The link needs to be discovered before the link juice is actually counted. At least that is the way that I understand it.
I do know a few professionals who believe that if a link isn't clicked, link juice is never passed. I don't know if that is necessarily true. It makes sense that a link could be discovered but not have any equity because it isn't being used. I wonder if someone has a better idea of whether or not that is true, or if it is another secret Google keeps.
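In the meantime, one thing you can verify yourself is that the link is actually there to be discovered: present in the HTML and not nofollowed. A standard-library-only sketch (both URLs are placeholders):

    # Sketch: check that a linking page contains a followable link to your
    # site. Standard library only; the URLs below are placeholders.
    from html.parser import HTMLParser
    from urllib.request import urlopen

    class LinkChecker(HTMLParser):
        def __init__(self, target):
            super().__init__()
            self.target = target
            self.found = False
            self.nofollow = False

        def handle_starttag(self, tag, attrs):
            if tag != "a":
                return
            attrs = dict(attrs)
            if self.target in (attrs.get("href") or ""):
                self.found = True
                self.nofollow = "nofollow" in (attrs.get("rel") or "")

    linking_page = "https://example.com/some-post/"
    html = urlopen(linking_page).read().decode("utf-8", errors="replace")

    checker = LinkChecker("yoursite.com")
    checker.feed(html)
    print("link present:", checker.found, "| nofollow:", checker.nofollow)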

Related Questions
-
For FAQ Schema markup, do we need to include every FAQ that is on the page in the markup, or can we use only selected FAQs?
The website FAQ page we are working on has more than 50 FAQs. The FAQ Schema guidelines say the markup must be an exact match with the content. Does that mean all 50+ FAQs must be in the markup? Or does it mean that the few FAQs we decide to put in the markup must exactly match their on-page counterparts?
Intermediate & Advanced SEO | PKI_Niles
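For reference, FAQPage markup is plain JSON-LD, and the core guideline constraint is that whatever you mark up must appear verbatim on the page. A sketch of generating markup for a chosen subset (the sample questions are placeholders; this illustrates the structure, not a ruling on the all-or-subset question):

    # Sketch: generate FAQPage JSON-LD for a chosen subset of on-page FAQs.
    # Every Q&A pair in the output must also appear verbatim on the page;
    # the sample questions below are placeholders.
    import json

    selected_faqs = [
        ("What is link juice?", "A slang term for the equity a link passes."),
        ("Do links need clicks to count?", "See the discussion above."),
    ]

    schema = {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": question,
                "acceptedAnswer": {"@type": "Answer", "text": answer},
            }
            for question, answer in selected_faqs
        ],
    }

    # Embed the output in a <script type="application/ld+json"> tag.
    print(json.dumps(schema, indent=2))
-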
Backlink audit - anyone know of a good tool for manually checking backlinks?
Hi all, I'm looking at how to handle backlinks on a site, and am seeking a tool into which I can manually paste backlinks - is there a good backlink audit tool that offers this functionality? Please let me know! Thanks in advance, Luke
Intermediate & Advanced SEO | McTaggart
-
Click To Reveal vs Rollover Navigation Better For Organic?
Hi, Any thoughts, data or insights as to which is better in a top navigation: click to reveal the nav links, or rollover to reveal the nav links? Regular content in an accordion (click to reveal) is evidently not best practice. Does that apply to navigation as well? Thanks! Best... Mike
Intermediate & Advanced SEO | 94501
-
What's the best way to noindex pages but still keep backlinks equity?
Hello everyone, Maybe it is a stupid question, but I'll ask the experts... What's the best way to noindex pages but still keep the backlink equity from those noindexed pages? For example, let's say I have many pages that look similar to a "main" page, which is the only one I want to appear on Google, so I want to noindex all pages except that "main" page... but what if I also want to transfer any link equity present on the noindexed pages to the main page? The only solution I have thought of is to add a canonical tag pointing to the main page on those noindexed pages... but will that work, or wreak havoc in some way?
Intermediate & Advanced SEO | fablau
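For illustration, here is what the combination described above looks like at the HTTP level (Flask is an assumption, not something from the question). Note that the two signals pull in opposite directions, noindex saying "drop this page" and canonical saying "credit that page", and Google has cautioned against pairing them:

    # Sketch: emit noindex plus a canonical hint as HTTP headers.
    # Flask and the URLs are assumptions for illustration only.
    from flask import Flask, make_response

    app = Flask(__name__)

    @app.route("/variant-page/")
    def variant_page():
        response = make_response("<html>...variant content...</html>")
        # Keep this page out of the index:
        response.headers["X-Robots-Tag"] = "noindex"
        # Point duplication/equity signals at the main page:
        response.headers["Link"] = '<https://example.com/main-page/>; rel="canonical"'
        return response
-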
Anyone actually getting a noticeable SEO boost from a Bitly or TinyURL backlink?
Hi, I'm looking for an example/use case of someone whose site has been linked to from another using a Bitly or other generic URL-shortener link. I'm specifically interested in proving/disproving the value of the backlink in terms of a boost in SEO rankings. Ideally you somehow got a juicy backlink from a reputable site, but they accidentally linked to you using a Bitly or something, yet you saw a noticeable increase in your page's search rankings, thus proving that a Bitly link still passes all SEO value. Or alternatively, you got that juicy backlink and noticed nothing at all, or not much, and are frustrated that they used a Bitly. I'm launching a study on this soon to identify the possible value behind short links as backlinks. Yes, I know that Matt Cutts says all short links are 301 redirects, which pass something like 99.9% of link juice. I'd just like to see some use cases on this. Thanks!
Intermediate & Advanced SEO | Rebrandly
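A quick sanity check for a study like this is to confirm what status code a given short link actually returns, without following the redirect. A standard-library sketch (the short URL is a placeholder):

    # Sketch: see whether a short link answers with a 301 (permanent) or a
    # 302 (temporary). http.client does not follow redirects, so we can read
    # the raw status and Location header. The short URL is a placeholder.
    import http.client
    from urllib.parse import urlparse

    short_url = "https://bit.ly/xxxxxxx"
    parts = urlparse(short_url)

    conn = http.client.HTTPSConnection(parts.netloc)
    conn.request("GET", parts.path or "/")
    response = conn.getresponse()
    print(response.status, "->", response.getheader("Location"))
-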
Lazy Loading of products on an E-Commerce Website - Options Needed
Hi Moz Fans. We are in the process of re-designing our product pages and we need to improve the page load speed. Our developers have suggested that we load the associated products on the page using lazy loading. While I understand this will certainly have a positive impact on the page load speed, I am concerned about the SEO impact. We can have upwards of 50 associated products on a page, so we need a solution. So far I have found the following solution online, which uses lazy loading and escaped fragments; the concern here is with serving an alternate version to search engines. The solution was developed by Google not only for lazy loading, but for indexing AJAX content in general.
Here's the official page: Making AJAX Applications Crawlable. The documentation is simple and clear, but in a few words the solution is to use slightly modified URL fragments.
A fragment is the last part of the URL, prefixed by #. Fragments are not propagated to the server; they are used only on the client side to tell the browser to show something, usually to move to an in-page bookmark.
If instead of using # as the prefix you use #!, this instructs Google to ask the server for a special version of your page using an ugly URL. When the server receives this ugly request, it's your responsibility to send back a static version of the page that renders an HTML snapshot (the not-indexed image in our case). It seems complicated, but it is not; let's use our gallery as an example. Every gallery thumbnail has to have a hyperlink like:
http://www.idea-r.it/...#!blogimage=<image-number>
When the crawler finds this markup, it will change it to:
http://www.idea-r.it/...?_escaped_fragment_=blogimage=<image-number>
Let's take a look at what you have to answer on the server side to provide a valid HTML snapshot. My implementation uses ASP.NET, but any server technology will do:

    var fragment = Request.QueryString["_escaped_fragment_"];
    if (!String.IsNullOrEmpty(fragment))
    {
        var escapedParams = fragment.Split(new[] { '=' });
        if (escapedParams.Length == 2)
        {
            var imageToDisplay = escapedParams[1];
            // Render the page with the gallery showing
            // the requested image (statically!)
            ...
        }
    }

What's rendered is an HTML snapshot, that is, a static version of the gallery already positioned on the requested image (server side). To make it perfect we have to give the user a chance to bookmark the current gallery image. 90% comes for free: we only have to parse the fragment on the client side and show the requested image:

    if (window.location.hash)
    {
        // NOTE: remove initial #
        var fragmentParams = window.location.hash.substring(1).split('=');
        var imageToDisplay = fragmentParams[1];
        // Render the page with the gallery showing the requested image (dynamically!)
        ...
    }

The other option would be to look at a recommendation engine to show a small selection of related products instead. This would cut the total number of related products down. The concern with this one is that we are removing a massive chunk of content from the existing pages; some of it is not the most relevant, but it is content. Any advice and discussion welcome 🙂
Intermediate & Advanced SEO | JBGlobalSEO
-
Click Through Rate on Password Protected Pages
Hi Moz community, I have a website with a large database of 800+ important pages, and I want Google to know when people visit and stay on these pages. However, these pages are only accessible once people create an account with a password and sign in. I know that since these pages are password protected, Google doesn't index them, but when our visitors stay for a while on our site browsing through our database, does this data get included in our CTR and bounce rate by Google? It is really important for Google to know about our database (that people are staying on our site for a while) for SEO purposes, so I wanted to know whether the CTR gets measured even though these pages aren't crawled. Thanks for the help!!
Intermediate & Advanced SEO | danstern
-
How long does a new domain need to get a specific level of trust?
We are a small start-up in Germany in the sports and health sector. We are currently building a network of people in that sector and give each person a separate WordPress blog. The idea is to create a big network of experts. My question is: how long does it take for Google to trust a completely new URL? We set up each project and create content on the page. Each week the owner of the site puts up an expert article that contains keywords, and we set certain links from other blogs, etc. Also, do you think it is more important for a site to get, say, 20 backlinks from anywhere, or 5 backlinks from very trusted blogs?
Intermediate & Advanced SEO | wellbo