Affiliate links vs. SEO (updated 19.02.2014)
-
UPDATE - 19.02.2014:
Hi,
We got another negative answer from Google pointing again to our affiliate links, so the 301 redirect and block were not enough.
I understand the need to contact all of the affiliates and ask them to add nofollow. We've started the process, but it will take time, a lot of time. So I'd like to bring another two scenarios I have in mind to your attention:
1. Disavow all the affiliate links.
Is it possible to add a large number of domains (>1000) to the disavow file? Has anyone tried this?
2. Serve a 404 status for URLs coming from affiliates that did not add the nofollow attribute (a rough sketch of how this could work is below).
This way we essentially tell Google that the content is no longer available, but we will end up with a few thousand 404 error pages.
The only way to fix all those errors is by 301 redirecting them afterwards (but then the link juice might 'restart' flowing and the problem might persist). Any input is welcome.
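For reference, scenario 2 could look roughly like the following in an Apache .htaccess file. This is only a sketch: the aff_id parameter name is a placeholder for whatever our affiliate engine actually appends, and the [G] flag returns a 410 'Gone' status (use [R=404] instead if a plain 404 is preferred).

# Serve 410/404 for any request carrying the affiliate tracking parameter
# (parameter name is a placeholder)
RewriteEngine On
RewriteCond %{QUERY_STRING} (^|&)aff_id= [NC]
RewriteRule ^ - [G,L]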
Thanks
Hi Mozers,
After a reconsideration request regarding our link profile, we got a 'warning' answer about some of our affiliate sites (links coming from our affiliate sites that violate Google's quality guidelines).
What we did (and it seemed like the best way to fix the 'SEO mistake' without turning off the affiliate channel) was to 301 redirect all those links to an /AFFN/ folder and block that folder from indexing; the setup looks roughly like the sketch below.
We're still waiting for an answer on our last reconsideration request. I'd like to know your opinion about this: is this a good way to deal with these types of links if they're reported? Changing the affiliate engine and all the links on the affiliate sites would be a big technical and time effort, which is why I want to make sure it's truly needed.
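Roughly, the current setup looks like this (the aff_id parameter name is only a placeholder, and the rewrite rules are a simplified sketch of what the site actually does):

# robots.txt - block crawling of the affiliate landing folder
User-agent: *
Disallow: /AFFN/

# .htaccess - 301 affiliate-tagged requests into the blocked folder
RewriteEngine On
RewriteCond %{REQUEST_URI} !^/AFFN/
RewriteCond %{QUERY_STRING} (^|&)aff_id= [NC]
RewriteRule ^(.*)$ /AFFN/$1 [R=301,L]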
Best,
Silviu -
As I said before, a 301 redirect will pass PageRank. Even if it goes to a blocked folder, that's still domain-level benefit coming into your site from "paid" links.
The best solution, in my opinion, is for sites to run their affiliate program through another domain first, and 302 (temporary) redirect the user to the main site.
Affiliates link to www.YourAffiliateDomain.com/?afflink-id=123, and that domain has a domain-wide robots.txt disallow. The ?afflink-id=123 part tells the system where to send the user on the primary domain, and the user goes from that URL through a 302 redirect to the appropriate URL on your primary domain.
No PageRank is passed, and you can kill off the affiliate domain if you ever need to, at which point those redirects stop coming into the site. A minimal sketch of that setup is below.
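A minimal sketch of that setup on the affiliate domain might look like the following. The afflink-id parameter comes from the example above; the mapping from the ID to a landing page is purely illustrative, and in practice you would resolve it with a lookup (a RewriteMap or an application-level redirect) rather than a fixed pattern.

# robots.txt on www.YourAffiliateDomain.com - keep the whole domain from being crawled
User-agent: *
Disallow: /

# .htaccess on www.YourAffiliateDomain.com - temporary (302) redirect to the primary site
# (the target domain and the id-to-landing-page mapping are placeholders)
RewriteEngine On
RewriteCond %{QUERY_STRING} (^|&)afflink-id=([0-9]+) [NC]
RewriteRule ^$ https://www.yourprimarydomain.com/landing/%2? [R=302,L]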
If you are unable to do all of this, you can submit a disavow file for all of the non-compliant affiliate domains after asking them to nofollow their links. I think the limit is supposed to be 2,000 domains, but I've heard of people submitting as many as 4,000 with no problem. Just give it a try and see what happens.
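The disavow file itself is just a plain UTF-8 text file uploaded through Google's Disavow Links tool, with one entry per line, for example (the domain names below are placeholders):

# Affiliate domains that would not add nofollow
domain:affiliate-example-one.com
domain:affiliate-example-two.net
# Individual URLs can also be listed
http://affiliate-example-three.org/review-page.html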
-
Hi guys,
I've updated the post with the latest news and switched it to 'discussion'.
Let me know your thoughts. Cheers,
S. -
Thanks for the insight Everett,
That's what I'm afraid of - the 'benefit' at the domain level.
That's the plan: get the affiliates to update their links, but I'm sure the process will not be very fast. -
Hello,
Even though you are blocking that folder, the fact remains that you are paying people a commission to place followable links on their sites. Since a 301 redirect passes PageRank, you are still violating Google's guidelines even if the page to which they point is blocked in the robots.txt file. This is because, technically, you might still benefit at the domain level from those links pointing into your domain.
If you turned those links into 302 redirects and/or had the affiliates update them to add the nofollow attribute (example below), it would probably be enough.
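For the affiliates who do update their code, the change is just adding the rel attribute to the link markup, something like this (the URL and parameter are placeholders):

<a href="https://www.yourprimarydomain.com/product-page?aff_id=123" rel="nofollow">Product name</a>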
-
What you did is very appropriate. As Oleg Korneitchouk mentioned, you must nofollow all of those links too.
-
Check out: http://searchengineland.com/googles-matt-cutts-on-affiliate-links-we-handle-majority-of-them-125859
I would message all your affiliates and ask them to add nofollow, make sure the default link code you give to new affiliates includes nofollow, and keep your 301 redirect in place. In your next reconsideration request (if you need one), mention all of the steps you took.