Affiliate links vs. SEO (updated 19.02.2014)
-
UPDATE - 19.02.2014:
Hi,
We got another negative answer from Google pointing again to our affiliate links, so the 301 redirect and block were not enough.
I understand the need to contact all of them and ask for the nofollow; we've started the process, but it will take time, a lot of time. So I'd like to bring to your attention another two scenarios I have in mind:
1. Disavow all the affiliate links.
Is it possible to add a big number of domains (>1,000) to the disavow file? Has anyone tried this? (Rough sketch of the file format below.)
2. Serve a 404 status for URLs coming from affiliates that did not add the nofollow attribute.
This way we kind of tell Google that the content is no longer available, but we will end up with a few thousand 404 error pages.
The only way to fix all those errors is by 301 redirecting them afterwards (but this way the link juice might 'restart' flowing and the problem might persist). Any input is welcome.
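For reference, this is roughly what I understand a disavow file with whole domains looks like (the domains below are made up):

# Affiliate domains that have not yet added nofollow (hypothetical examples)
domain:affiliate-site-one.example.com
domain:affiliate-site-two.example.net
# Individual URLs can be listed as well
http://affiliate-site-three.example.org/review-of-our-product.html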
Thanks
Hi Mozers,
After a reconsideration request regarding our link profile, we got a 'warning' answer about some of our affiliate sites (links coming from our affiliate sites that violate Google's quality guidelines).
What we did (and what seemed the best way to fix the 'SEO mistake' without turning off the affiliate channel) was to 301 redirect all those links to an /AFFN/ folder and block this folder from indexing.
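For anyone who wants specifics, the setup is roughly the following (sketched for Apache; the aff_id parameter and landing.php path are stand-ins for our real ones):

# --- .htaccess (Apache sketch) ---
RewriteEngine On
# Any request arriving with the affiliate tracking parameter gets a 301 into the
# blocked /AFFN/ folder; the original query string is carried along by default.
RewriteCond %{QUERY_STRING} (^|&)aff_id= [NC]
RewriteRule ^(.*)$ /AFFN/landing.php [R=301,L]

# --- robots.txt (keeps the folder from being crawled) ---
User-agent: *
Disallow: /AFFN/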
We're still waiting for an answer on our last recon. request. I want to know your opinion about this: is this a good way to deal with this type of link if it gets reported? Changing the affiliate engine and all the links on the affiliate sites would be a big time and technical effort, which is why I want to make sure it's truly needed.
Best,
Silviu -
As I said before, a 301 redirect will pass PageRank. Even if it goes to a blocked folder, that's still domain-level benefit coming into your site from "paid" links.
The best solution, in my opinion, is for sites to run their affiliate program through another domain first, and 302 (temporary) redirect the user to the main site.
Affiliates link to www.YourAffiliateDomain.com/?afflink-id=123, which has a domain-wide robots.txt disallow. The ?afflink-id=123 part tells the system where to redirect the user to on the primary domain. The user goes from that URL through a 302 redirect to the appropriate URL on your primary domain.
No PageRank is passed, and you can kill off the domain if you ever need to, at which point those redirects will stop coming into the site.
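Here's a bare-bones sketch of that setup, assuming the affiliate domain runs on Apache (the hard-coded afflink-id mapping is just for illustration; a real setup would look the ID up with a script or rewrite map):

# --- robots.txt on www.YourAffiliateDomain.com: keep the whole domain out of the crawl ---
User-agent: *
Disallow: /

# --- .htaccess on www.YourAffiliateDomain.com (Apache sketch) ---
RewriteEngine On
# afflink-id=123 sends the visitor to the matching page on the primary domain
# via a temporary (302) redirect, so no PageRank is passed to the main site.
RewriteCond %{QUERY_STRING} (^|&)afflink-id=123(&|$)
RewriteRule ^(.*)$ http://www.YourPrimaryDomain.com/some-landing-page/ [R=302,L]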
If you are unable to do all of this, you can submit a disavow file for all non-compliant affiliate domains after asking them to nofollow their links. I think the limit is supposed to be 2,000 domains, but I've heard of people doing as many as 4,000 with no problem. Just give it a try and see what happens.
-
Hi guys,
I've updated the post with the latest news and switched it to 'discussion'.
Let me know your thoughts. Cheers,
S. -
Thanks for the insight, Everett.
That's what I'm afraid of: the 'benefit' at the domain level.
That's the plan: have the affiliates update their links, but I'm sure the process will not be very fast. -
Hello,
Even though you are blocking that folder, the fact remains that you are paying people a commission to place followable links on their sites. Since a 301 redirect passes PageRank, you are still violating Google's guidelines even if the page to which they point is blocked in the robots.txt file. This is because, technically, you might still benefit at the domain level from those links pointing into your domain.
If you turned those links into 302 redirects and/or had the affiliates update them to add nofollow code, it would probably be enough.
-
What you did is very appropriate. As Oleg Korneitchouk mentioned, you must nofollow all those links too.
-
Check out: http://searchengineland.com/googles-matt-cutts-on-affiliate-links-we-handle-majority-of-them-125859
I would message all your affiliates and ask them to nofollow their links, make the new default URL you give to affiliates nofollow, and keep your 301 redirect setup. In your next reconsideration request (if you need one), mention all the steps you took.
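The change you're asking affiliates to make is just the rel attribute on the anchor, something like this (hypothetical URL and tracking parameter):

<!-- Before: a followable affiliate link -->
<a href="http://www.yoursite.com/?aff_id=123">Check out these widgets</a>

<!-- After: the same link with rel="nofollow", the version to hand out as the new default -->
<a href="http://www.yoursite.com/?aff_id=123" rel="nofollow">Check out these widgets</a>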