Will editorial links with UTM parameters marked as utm_source=affiliate still pass link juice?
-
Occasionally some of our clients receive editorial mentions and links in which the author adds UTM parameters to the outbound links on their blog. The links are always natural, never compensated, and followed. However, they are sometimes tagged as utm_source=affiliate even though we have no existing affiliate relationship with the author. My practice has been to ask the author to add a rel="norewrite" attribute to the link to remove any trace of the word affiliate.
I have read that UTM parameters do not affect link juice transfer; however, given the inaccurate "affiliate" source, I wouldn't want Google to misunderstand and think that we are compensating people for followed editorial links.
Should I continue following this practice, or is it fine to leave these links as they are?
Thanks!
-
Thank you, Eric. It's definitely a gray area. I had considered your suggestion about requesting the author change the source parameter to something other than affiliate, and that's a great idea too.
I welcome more dialogue on this from others who may have some input.
-
I would think the parameters wouldn't count, but the root URL would (e.g., search engines ignore anything after the "#" in a URL, and Google generally consolidates parameter variations back to the main URL). I think Google devalues affiliate links, because those aren't "editorial" links - they're essentially paid links. It's really hard to say whether the links will be treated as "affiliate" by Google just because the author added that word in the tagging (though it may serve as a flag). My recommendation is to reach out and ask if they can change that URL tagging, since you're not an affiliate and don't want to be seen as one. UTM parameters are common for campaign tracking, and they don't influence the URL in terms of passing juice or whatever, so really you can put whatever you want in them. They exist so you can define attribution models more effectively and learn which campaign provides the best ROI for your company/website.
I'd try to change that parameter, maybe making the case that the author adding parameters like that to your URL is hurting your tracking (your data gets mixed in with real affiliates). It seems kind of weird to me that an author would add a tracking parameter like that without someone asking, but maybe that happens more often than I realize.
Let me know how it works out - I haven't seen this case before, so if others have experience I'd be interested to see how people have handled it in the past.
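If it helps to see what that parameter swap actually looks like, here's a minimal Python sketch of the kind of rewrite being discussed - the URL, the "editorial" replacement value, and the function name are all hypothetical:

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def rewrite_utm_source(url, new_source="editorial"):
    """Swap utm_source=affiliate for a neutral value, leaving every
    other query parameter untouched."""
    parts = urlsplit(url)
    params = parse_qsl(parts.query, keep_blank_values=True)
    rewritten = [
        (key, new_source) if key == "utm_source" and value == "affiliate"
        else (key, value)
        for key, value in params
    ]
    return urlunsplit(parts._replace(query=urlencode(rewritten)))

# Hypothetical example link from an author's blog post:
print(rewrite_utm_source(
    "https://www.example.com/page?utm_source=affiliate&utm_medium=blog&utm_campaign=spring"
))
# -> https://www.example.com/page?utm_source=editorial&utm_medium=blog&utm_campaign=spring
```

The point is just that utm_source is an arbitrary label: changing it alters your analytics attribution, not the page the link resolves to.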
Related Questions
-
How do I predict the quality of an inbound link before using the Disavow links tool?
I am working on an ecommerce website and having issues with bad inbound links. I am quite excited to use the Disavow links tool to clean up bad inbound links and sustain my performance on Google. We have 170,000+ inbound links from 1,000+ unique root domains, but I have found that most of the root domains have low-quality content and structure. Honestly, I don't want inbound links from websites that are not active in publishing. How do I predict the quality of inbound links or root domains before using the tool? Are there any specific criteria for grading the quality of inbound links?
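A rough sketch of the kind of grading pass you could run before touching the tool - the metrics, thresholds, and sample domains below are hypothetical stand-ins for your own backlink export, but the output follows Google's documented disavow file format (comment lines starting with #, and domain: lines for whole domains):

```python
# Hypothetical backlink data - in practice this would come from a
# CSV export of your backlink tool of choice.
backlinks = [
    {"domain": "spammy-directory.example", "authority": 4, "spam_score": 82},
    {"domain": "trusted-news.example", "authority": 61, "spam_score": 2},
]

def build_disavow(links, max_spam=60, min_authority=10):
    """Flag root domains below a quality threshold and emit them in
    Google's disavow file format."""
    flagged = sorted(
        link["domain"]
        for link in links
        if link["spam_score"] > max_spam or link["authority"] < min_authority
    )
    lines = ["# Disavow candidates - review each one manually before uploading"]
    lines += [f"domain:{d}" for d in flagged]
    return "\n".join(lines)

print(build_disavow(backlinks))
# # Disavow candidates - review each one manually before uploading
# domain:spammy-directory.example
```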
Industry News | CommercePundit
-
Google still showing sitelinks from old website
Hi guys, we relaunched our website www.segafredo.com.au a few weeks ago, but Google is still showing sitelinks from our old page that no longer exist... Is there anything we can do about this? Sit back and wait, or try demoting the old URLs in Webmaster Tools? Looking forward to your tips! Ciao, Manny.
Industry News | Immanuel
-
Changing Domains - How much link juice is lost with 301 redirect?
My company is thinking about rebranding and moving over to a new domain. While we don't have a lot of backlinks, we do have some very valuable ones that we'd hate to lose. That being said, I think we are in such an infancy that the backlinks we have shouldn't prevent us from rebranding if that's what we choose to do. I am just trying to get an idea of how moving to a new domain will affect our domain authority if we redirect all the pages. Is the best thing to do simply to redirect, or should we reach out to our most valuable links, let them know the domain has changed, and hope they update their links to us? How much is lost by simply 301 redirecting every page? We are getting around 70 organic clicks per day and would rather not start from zero again 🙂
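As a side note, once a migration like this is live it's worth confirming that every old URL answers with a single 301 rather than a 302 or a redirect chain. A minimal sketch in Python, assuming the requests library is installed; the URL map is hypothetical:

```python
import requests

# Hypothetical old-to-new URL map for the migration.
URL_MAP = {
    "https://old-domain.example/about": "https://new-domain.example/about",
    "https://old-domain.example/pricing": "https://new-domain.example/pricing",
}

def check_redirects(url_map):
    """Report whether each old URL returns a single 301 pointing at
    the expected new URL."""
    for old, expected in url_map.items():
        resp = requests.head(old, allow_redirects=False, timeout=10)
        location = resp.headers.get("Location")
        ok = resp.status_code == 301 and location == expected
        print(f"{old} -> {resp.status_code} {location} [{'OK' if ok else 'CHECK'}]")

check_redirects(URL_MAP)
```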
Industry News | DemiGR
-
Is this still Google?
My niche, my concern.
Industry News | webfeatus
http://www.google.com/search?q=jimbaran+villa
My site just dropped out of the rankings completely. But if you look at the Google search above, you will notice two things:
1. First page: 75% of the space above the fold is dedicated to Google making money.
2. Subsequent pages: it is as if you aren't actually searching Google. If you flip through a few pages, what you are actually searching is:
agoda.com
flipkey.com
tripadvisor.com
homeaway.com
Do I have a point, or am I simply having a cynical day?
-
Will Google ever begin penalising bad English/grammar with regard to rankings and SEO?
Considering Google seem to be on a great crusade with all their algorithm updates to raise the overall "quality" of content on the Internet, I'm a bit concerned with their seeming lack of action towards penalising sites that contain terrible English. I'm sure you've all noticed this when you attempt to do some proper research via Google and come across an article that "looks" to be what you're after, then you click through and realise it's obviously been either put together in a rush by someone not paying attention or putting much effort in, or been outsourced for cheap labour to another country whose workers aren't (close to being) native speakers. It's getting really old trying to make sense of articles that have completely incorrect grammar, entirely missing words, verb tenses that don't make any sense, randomly over-extravagant adjectives thrown in just as padding, etc. No offence to all those from non-native-speaking countries who are attempting to make a few bucks online, but this for me is becoming far more of an issue in terms of "quality" of information online than some of the other search issues that are being given higher priority, and it just seems strange that Google have been so blasé about it up to this point - especially given so many of these articles and pages are nothing more than outsourced filler for cheap traffic. I understand it's probably hard to code something so advanced, but it would go a long way towards making the web a better place, in my opinion. Anyone else feeling the same way? Thoughts?
Industry News | ExperienceOz
-
When will Rand put out "Art of SEO 2nd Edition"? (ANSWER: IN ABOUT 2 WEEKS)
The first edition was printed at the end of 2009. Great book. It needs updating, of course. I would buy the next edition if it were updated in the awesome way that I know Rand and the others would do it.
Industry News | stubby
-
Hello, actually I have a bit of a doubt: if I create a Google Plus business page, will it help or have any effect on my website's ranking?
Industry News | jaybinary
-
Is a canonical pointing to itself a link juice leak?
Duane Forrester from Bing said that you should not have a canonical pointing back to the same page, as it confuses Bingbot:
Industry News | AlanMosley
“A lot of websites have rel=canonicals in place as placeholders within their page code. It's best to leave them blank rather than point them at themselves. Pointing a rel=canonical at the page it is installed in essentially tells us 'this page is a copy of itself. Please pass any value from itself to itself.' No need for that.” He also stated that a canonical is much like a 301, except that it does not physically move the user to the canonical page. This leads me to think that having such a tag may leak link juice: “Please pass any value from itself to itself.”
Google has stated that Googlebot can handle such a tag, but this still does not mean that it is not leaking link juice.
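For anyone who wants to see what their own pages are doing here, a small sketch in Python (assuming requests and BeautifulSoup are installed; the URL is hypothetical) that reports whether a page's rel=canonical points back at itself:

```python
import requests
from bs4 import BeautifulSoup

def inspect_canonical(url):
    """Fetch a page and report whether its rel=canonical tag points
    back at the page itself."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    tag = soup.find("link", rel="canonical")
    if tag is None:
        return None, False
    href = tag.get("href", "")
    return href, href.rstrip("/") == url.rstrip("/")

href, is_self = inspect_canonical("https://www.example.com/some-page")
print(f"canonical: {href}  self-referential: {is_self}")
```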