How much link juice could be passed?
-
When evaluating a site to decide whether or not to pursue a link, how do you decide if it would pass enough link juice to be worth pursuing?
-
Hi Runnerkik,
As Paul suggested, using Opensiteexplorer.org is a good way to go. Back in the day, when PR was everything, I used the following formula to calculate the PR flow of a link:
PR{flow} = (PR{page} / N{links}) * 0.85 + 0.15
So for a page with a PR of 3 and 115 links on the page you get:
PR{flow} = (3 / 115) * 0.85 + 0.15 = 0.17217 points of link juice. Once a page accumulated enough inbound link juice, it would go up a PR level.
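If you want to play with the numbers, here is that calculation as a small Python sketch. It only reproduces the rough old-school approximation above, not how Google actually computes PageRank:

```python
# Rough, old-school approximation of the link juice passed by one link,
# using the classic damping factor d = 0.85 from the formula above.
def pr_flow(page_pr: float, num_links: int, damping: float = 0.85) -> float:
    """Estimate the PR 'flow' from a single link on a page."""
    return (page_pr / num_links) * damping + (1 - damping)

# The example from above: a PR 3 page carrying 115 links.
print(round(pr_flow(3, 115), 5))  # 0.17217
```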
Since we no longer (primarily) look at the PR of a page when building links (because we want to earn links), you get less of a say in the matter. If someone embeds your link in their site, there is not much you can do about it but contact them (if you don't want the link).
Writing good content creates a natural flow of inbound links; combine this with social media and you get a consistent inbound picture that the search engines will appreciate.
If the site has a nice domain authority figure and the page you're going for has a nice page authority figure, you'll know it's worth pursuing.
Still, you can always use the formula to assess whether it is a link you want. It has always worked for me (and still does).
Hope this helps.
Kind regards,
Jarno
-
Hello Runnerkik!
Personally, when I find any websites that are potential link sources, I run them through an intense 'Open Site Explorer' check for Domain Authority and any linking domains that are present. If you look at the middle section, 'Domain Authority', for the websites that are linking TO the page you are trying to get a link from, you can get quite a good idea.
Paul
Related Questions
-
How do I get links to my website?
Hi, I'm very new to SEO and want to get links to my website: www.warningbroker.com. How can I get links to my website?
Intermediate & Advanced SEO | marketing660
-
Google WMT Turning 1 Link into 4,000+ Links
We operate 2 ecommerce sites. The About Us page of our main site links to the homepage of our second site. It's been this way since the second site launched about 5 years ago. The sites sell completely different products and aren't related besides both being owned by us. In Webmaster Tools for site 2, it's picking up ~4,100 links coming to the home page from site 1. But we only link to the home page 1 time in the entire site and that's from the About Us page. I've used Screaming Frog, IT has looked at source, JavaScript, etc., and we're stumped. It doesn't look like WMT has a function to show you on what pages of a domain it finds the links and we're not seeing anything by checking the site itself. Does anyone have experience with a situation like this? Anyone know an easy way to find exactly where Google sees these links coming from?
Intermediate & Advanced SEO | Kingof50
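For a question like the one above, one sanity check is to crawl your own URL list and count the anchors yourself, independently of WMT. A minimal sketch, assuming you can export site 1's URLs (e.g. from its sitemap); the domains below are hypothetical, and requests/beautifulsoup4 are third-party packages:

```python
import requests
from bs4 import BeautifulSoup

# Hypothetical values -- substitute your exported URL list and target homepage.
site1_urls = ["https://www.site1-example.com/", "https://www.site1-example.com/about-us/"]
target = "https://www.site2-example.com/"

for url in site1_urls:
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    # Count every anchor whose href points at site 2's homepage.
    hits = [a for a in soup.find_all("a", href=True)
            if a["href"].rstrip("/") == target.rstrip("/")]
    if hits:
        print(f"{url}: {len(hits)} link(s) to {target}")
```
-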
Technical Question on Image Links - Part of Addressing High Number of Outbound Links
Hi - I've read through the forum, and have been reading online for hours, and can't quite find an answer to what I'm searching for. Hopefully someone can chime in with some information. 🙂 For some background - I am looking closely at four websites, trying to bring them up to speed with current guidelines, and recoup some lost traffic and revenue. One of the things we are zeroing in on is the high number of outbound links in general, as well as inter-site linking, and a nearly total lack of rel=nofollow on any links. Our current CMS doesn't allow an editor to add them, and it will require programming changes to modify any past links, which means I'm trying to ask for the right things, once, in order to streamline the process. One thing that is nagging at me is that the way we link to our images could be getting misconstrued by a more sensitive Penguin algorithm. Our article images are all hosted on one separate domain. This was done for website performance reasons. My concern is that we don't just embed each image via a plain img tag, which would make this concern moot. We also have an href on each image, pointing to a 'larger view' of it, that precedes the img src in the code. We are still running the numbers, but as some articles have several images, and we currently have about 85,000 articles on those four sites... well, that's a lot of href links to another domain. I'm suggesting that one of the steps we take is to rel=nofollow the image hrefs. Our image traffic from Google search, or any image search for that matter, is negligible. On one site it represented just .008% of our visits in July. I'm getting a little pushback on that idea, as having a separate image server is standard for many websites, so I thought I'd seek additional information and opinions. Thanks!
Intermediate & Advanced SEO | MediaCF0
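Since the post above proposes adding rel=nofollow to the image hrefs programmatically, here is a rough sketch of what that batch change could look like with BeautifulSoup; the image-host domain and HTML sample are hypothetical:

```python
from bs4 import BeautifulSoup

IMAGE_HOST = "images.example-cdn.com"  # hypothetical separate image domain

def nofollow_image_links(article_html: str) -> str:
    """Add rel=nofollow to any anchor pointing at the image host."""
    soup = BeautifulSoup(article_html, "html.parser")
    for a in soup.find_all("a", href=True):
        if IMAGE_HOST in a["href"]:
            a["rel"] = "nofollow"
    return str(soup)

# An article image wrapped in a 'larger view' link, as described above.
sample = ('<a href="https://images.example-cdn.com/photo-large.jpg">'
          '<img src="https://images.example-cdn.com/photo.jpg"/></a>')
print(nofollow_image_links(sample))
```
-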
Do 404 Pages from Broken Links Still Pass Link Equity?
Hi everyone, I've searched the Q&A section, and also Google, for about the past hour and couldn't find a clear answer on this. When inbound links point to a page that no longer exists, thus producing a 404 Error Page, is link equity/domain authority lost? We are migrating a large eCommerce website and have hundreds of pages with little to no traffic that have legacy 301 redirects pointing to their URLs. I'm trying to decide how necessary it is to keep these redirects. I'm not concerned about the page authority of the pages with little traffic...I'm concerned about overall domain authority of the site since that certainly plays a role in how the site ranks overall in Google (especially pages with no links pointing to them...perfect example is Amazon...thousands of pages with no external links that rank #1 in Google for their product name). Anyone have a clear answer? Thanks!
Intermediate & Advanced SEO | M_D_Golden_Peak0
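Before dropping any of the legacy redirects discussed above, it may be worth verifying what each old URL currently returns, so you know which ones still 301 and which already 404. A small sketch; the URLs are hypothetical:

```python
import requests

# Hypothetical legacy URLs from the old site.
legacy_urls = ["https://www.shop-example.com/old-product-1",
               "https://www.shop-example.com/old-product-2"]

for url in legacy_urls:
    # allow_redirects=False exposes the immediate status code (301, 404, ...).
    resp = requests.get(url, allow_redirects=False, timeout=10)
    print(url, "->", resp.status_code, resp.headers.get("Location", "-"))
```
-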
SEO from links in frames?
A site was considering linking to us. Their web page is delivered entirely via frames. Humans can see the links on the page, but they're not visible in the source. I'm guessing that means Google can't detect the links, and there is no SEO effect, but I wanted to confirm. Here's the site: http://www.uofc-ulsa.tk/ Example links are the Princeton Review and Kaplan on the right sidebar. Here's the source code: view-source:http://www.uofc-ulsa.tk/ Do those links have any SEO impact?
Intermediate & Advanced SEO | lighttable0
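You can check what a crawler would see in a case like the one above by fetching the raw HTML and searching it for the link, since a plain fetch of a frameset page does not include the content loaded inside the frames. A sketch (the page is the one from the question and may no longer resolve):

```python
import requests

page = "http://www.uofc-ulsa.tk/"
needle = "princetonreview"  # a link humans reportedly see on the page

html = requests.get(page, timeout=10).text
# If the link doesn't appear in the raw source, a crawler fetching only
# this URL won't discover it either.
print(needle in html.lower())
```
-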
Static links in Google's guidelines
Google recommends having static links in its guidelines. Are breadcrumbs and static text links the same? Or, in addition to breadcrumbs, do I need static links on my page going from page A to B, etc.? The issue I have with static links done this way is that, if I go by the PR paper, they would decrease the juice of my homepage (which is the page I want to give the most juice to). Thx,
Intermediate & Advanced SEO | seoanalytics0
-
Is my "term & conditions"-"privacy policy" and "About Us" pages stealing link juice?
Should I make them nofollow? Or is this a bogus method?
Intermediate & Advanced SEO | SEObleu.com0
-
Robots.txt: Link Juice vs. Crawl Budget vs. Content 'Depth'
I run a quality vertical search engine. About 6 months ago we had a problem with our sitemaps, which resulted in most of our pages getting tossed out of Google's index. As part of the response, we put a bunch of robots.txt restrictions in place in our search results to prevent Google from crawling through pagination links and other parameter based variants of our results (sort order, etc). The idea was to 'preserve crawl budget' in order to speed the rate at which Google could get our millions of pages back in the index by focusing attention/resources on the right pages. The pages are back in the index now (and have been for a while), and the restrictions have stayed in place since that time. But, in doing a little SEOMoz reading this morning, I came to wonder whether that approach may now be harming us...
http://www.seomoz.org/blog/restricting-robot-access-for-improved-seo
http://www.seomoz.org/blog/serious-robotstxt-misuse-high-impact-solutions
Specifically, I'm concerned that a) we're blocking the flow of link juice and that b) by preventing Google from crawling the full depth of our search results (i.e. pages >1), we may be making our site wrongfully look 'thin'. With respect to b), we've been hit by Panda and have been implementing plenty of changes to improve engagement, eliminate inadvertently low quality pages, etc, but we have yet to find 'the fix'... Thoughts?
Kurus
Intermediate & Advanced SEO | kurus0
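One way to audit restrictions like those described above is to test representative result URLs against the live robots.txt using Python's standard-library parser; the domain and paths here are hypothetical:

```python
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.set_url("https://www.search-example.com/robots.txt")
rp.read()

# Hypothetical paginated/parameter variants of the kind blocked above.
for url in ["https://www.search-example.com/results?q=widgets&page=2",
            "https://www.search-example.com/results?q=widgets&sort=price"]:
    print(url, "->", "allowed" if rp.can_fetch("Googlebot", url) else "blocked")
```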