301-Redirects, PageRank, Matt Cutts, Eric Enge & Barry Schwartz - Fact or Myth?
-
I've been trying to wrap my head around this for the last hour or so and thought it might make a good discussion. There's been a ton about this in the Q&A here. Eric Enge's interview with Matt Cutts from 2010 (http://www.stonetemple.com/articles/interview-matt-cutts-012510.shtml) said one thing, and Barry Schwartz seemed to say another: http://searchengineland.com/google-pagerank-dilution-through-a-301-redirect-is-a-myth-149656
Is this all just semantics? Are all of these people really saying the same thing, and have they been saying the same thing ever since 2010? Cyrus Shepard shed a little light on things in this post when he said that people seemed to be confusing links and 301 redirects, viewing them as the same thing when they really aren't. He wrote, "There's a huge difference between redirecting a page and linking to a page." I think he is the only writer getting down to the heart of the matter, but I'm still in a fog.
In this video from April 2011, Matt Cutts states very clearly that "there is a little bit of pagerank that doesn't pass through a 301-redirect," continuing on to say that if this weren't the case, there would be a temptation to 301-redirect from one page to another instead of just linking.
VIDEO - http://youtu.be/zW5UL3lzBOA
So it seems to me, it is not a myth that 301-redirects result in loss of pagerank.
In this video from February 2013, Matt Cutts states that "The amount of pagerank that dissipates through a 301 is currently identical to the amount of pagerank that dissipates through a link."
VIDEO - http://youtu.be/Filv4pP-1nw
Again, Matt Cutts is clearly stating that yes, a 301-redirect dissipates pagerank.
Now for the "myth" part. Apparently the "myth" was about how much pagerank dissipates via a 301-redirect versus a link.
Here's where my head starts to hurt:
Does this mean that when Page A links to Page B it looks like this:
A -----> (reduces PageRank by about 15%) -----> B (inherits about 85% of Page A's PageRank if no other links are on the page)
But say the "link" that exists on Page A is no longer good, but it's still the original URL, which, when clicked, now redirects to Page B via a URL rewrite (301 redirect)....based on what Matt Cutts said, does the pagerank scenario now look like this:
A (with an old URL to Page B) -----> (reduces PageRank by about 15%) -----> URL rewrite (301 redirect, reduces PageRank by another 15%) -----> B (inherits about 72% of Page A's PageRank if no other links are on the page)
Forgive me, I'm not a mathematician, so I'm not sure if that 72% is right? (85% of 85% would be about 72.25%.)
It seems to me, from what Matt is saying, that the only way to avoid this scenario would be to make sure Page A was updated with the new URL, thereby avoiding the 301 redirect altogether?
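For what it's worth, the arithmetic behind those two scenarios can be sketched as simple compounding. This is a toy calculation only: the 0.85 damping factor is the figure from the original PageRank paper, and whether the damping really applies twice on a link-plus-301 is exactly the open question here.

```python
# Illustrative arithmetic only: the 0.85 damping factor comes from the
# original PageRank paper; Google has never confirmed its real numbers.
DAMPING = 0.85

# Scenario 1: Page A links directly to Page B (one damping step).
direct = DAMPING  # B receives about 85%

# Scenario 2, under the "damping happens twice" reading: Page A links
# to an old URL, which 301-redirects to Page B (two damping steps).
via_redirect = DAMPING * DAMPING

print(round(direct * 100, 2))        # 85.0
print(round(via_redirect * 100, 2))  # 72.25
```

So the 72% guess is right as compounding math; the real question is whether that second damping step actually happens.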
I recently had to re-write 18 product page URLs on a site and do 301 redirects. This was brought about by our hosting company initiating rules in the back end that broke all of our custom URLs. The redirects were to exactly the same product pages (so, highly relevant). PageRank tanked on all 18 of them, hard. Perhaps this is why I am diving into this question more deeply.
I am really interested to hear your point of view
-
Yes Doug, you totally get my confusion. Your scenarios describe more clearly exactly what I am wondering. In the case of your third example, Matt even stated pretty clearly in the video (perhaps even both videos) that chains of redirects can be a problem.
I totally agree with you that avoiding redirects altogether and updating the links is the way to go. Even Google's own PageSpeed Insights tool often makes this recommendation when evaluating the page speed of a site. If 301s are exactly the same as links, why would the tool recommend avoiding them?
Yes, I think perhaps Matt said what he did because he was looking at 301s and links in complete isolation. If so, then what he says is believable in theory, but I can't think of how it would actually happen in practice.
-
It is confusing, and it's something I was wondering about when I first saw the Matt Cutts February 2013 video. From what Matt says:
- We know that a link won't pass all of the PageRank. Some PageRank dissipates over each link.
- The amount of PageRank that dissipates through a 301 is identical to the amount that dissipates through a link.
But, I guess the problem with understanding this is that you can't take 301s and links and consider them in isolation. It's not an either/or.
Consider the following:
1. Page 1 -[link to]-> Page 2
Nice and simple: Page 2 gets its full entitlement of PageRank (taking into account sharing across links and dissipation).
2. Page 1 -[link to]-> 301 -> Page 3
Now I've got an extra step. Does this mean that the page rank that Page 3 inherits is affected by both the link and then the 301? Does the page rank dissipation happen twice?
If, say, 50% (not real numbers!) of PageRank value were lost at each link/301, then the original link to the 301 would lose 50%, and the 301 would lose the same again (50% of the remaining 50%), which means Page 3 gets just 25%.
What if I end up in the horrible situation of having
3. Page 1 -[link to]-> 301 -> 301 -> 301 -> Page 3
Does page rank decay happen on every redirect?
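That chain scenario can be modelled with a back-of-the-envelope sketch. Purely hypothetical: it assumes every hop (the initial link plus each 301) applies the same damping, and it uses the illustrative 0.85 factor from the original PageRank paper rather than any real Google figure.

```python
# Toy model only: assumes each hop (link or 301) applies the same ~15%
# damping. Google hasn't published the real mechanics; Matt Cutts only
# said a 301 damps about the same as a link, not that it stacks per hop.
DAMPING = 0.85  # figure from the original PageRank paper

def pagerank_after_hops(hops: int, damping: float = DAMPING) -> float:
    """Fraction of passable PageRank remaining after `hops` damping steps."""
    return damping ** hops

print(pagerank_after_hops(1))  # direct link: 0.85
print(pagerank_after_hops(2))  # link -> 301: roughly 0.72
print(pagerank_after_hops(4))  # link -> 301 -> 301 -> 301: roughly 0.52
```

If decay really does compound per hop, a long chain bleeds value fast, which is one more reason (alongside the fact that crawlers typically follow only a handful of redirect hops before giving up) to update links at the source rather than chain redirects.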
Personally, I've only used redirects where necessary and, where I can, I've tried to get inbound links updated to point to the correct page.
-
Dana,
When you say "inherits about 72% of Page A's pagerank if no other links are on the page," I think that's where your understanding goes off track... either that, or it's where mine does. My understanding is that the percentage of PR passed from one page to another is based on an unknown "X amount," not on the linking page's toolbar PageRank. I think it is better to say "...inherits about 72% of the PageRank that Page A is able to pass," not 72% of Page A's PageRank. Does that make sense?
-
In your second example above, the link would still pass 85% of the PageRank, not 72%. Obviously, in order for a 301 to pass PageRank, it needs to be used in a link. If a 301 link only passed 72%, it would always pass less PageRank than a regular link, which would contradict what Matt said.