Reciprocal Links and nofollow/noindex/robots.txt
-
Hypothetical Situations:
- You get a guest post on another blog and it offers a great link back to your website. You want to tell your readers about it, but linking to the post will turn that link into a reciprocal link instead of a one-way link, which presumably has more value. Should you nofollow your link to the guest post?
My intuition here, and the answer that I expect, is that if it's good for users, the link belongs there, and as such there is no trouble with linking to the post. Is this the right way to think about it? Would grey hats agree?
- You're working for a small local business and you want to explore some reciprocal link opportunities with other companies in your niche using a "links" page you created on your domain. You decide to get sneaky and either noindex your links page, block the links page with robots.txt, or nofollow the links on the page. What is the best practice?
My intuition here, and the answer that I expect, is that this would be a sneaky practice, and could lead to bad blood with the people you're exchanging links with. Would these tactics even be effective in turning a reciprocal link into a one-way link if you could overlook the potential immorality of the practice? Would grey hats agree?
-
-
Yes, your link back to the other site is in good faith and good for readers. If you don't do it too much, you shouldn't get dinged for recip linking.
-
About 4 or 5 years ago I used to see sites do this, usually using the robots.txt file to exclude spidering of their links page. I don't know if it's the "best practice," but it seems robots.txt was used more often than noindex on the page.
It's a sleazy thing to do and yes, it can cause bad blood with your link partners. I know because on more than one occasion I informed sites about that practice being used on them, and they removed their outbound links and thanked me for pointing out how they were being played for chumps.
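For reference, the robots.txt tactic described above looked something like this. This is a hedged sketch, not a recommendation, and the links-page path is hypothetical:

```text
# robots.txt — blocks crawlers from the reciprocal links page
# so outbound links there pass no value (path is hypothetical)
User-agent: *
Disallow: /links.html
```

Because the page is never crawled, the outbound links on it are never seen by search engines, while the inbound links from partners still count — which is exactly why partners tend to feel played when they discover it.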
-
-
Thanks, Ryan. I appreciate the answers, especially for the second question. Link exchanges aren't really my style as far as link building is concerned, but it kind of popped into my head as a result of the first question, so I figured I'd throw it out there. Thanks for the responses!
-
Hi Anthony.
Your first question asks how to inform your site's readers about a blog article you created on another site, without negatively impacting the link juice you are receiving from the article (i.e. creating a reciprocal link).
One possibility is mentioning the article without linking to it: "Check out my article on Grey Hat SEO at the SEOmoz site." A variation on the same idea is to show the article's URL as plain text rather than as a link: http://www.seomoz.org/grey-hat-seo (fictitious link). Since there is no actual link, you do not need to add nofollow and no link juice is lost.
You can also tweet the link or post it on Facebook or another social sharing site. If you display your tweets on your site, though, this tactic is less productive, since it creates the very reciprocal link you were trying to avoid.
You can also get creative: "Check out my new article on Grey Hat SEO tactics. It ranks #1 in Google! Click here to see" and then you provide a link to Google which shows the search results. Your reader would presumably click that result and you not only send the user to your article, but also send some positive signals to Google at the same time.
As for your second question, "How can I backstab my linking partners and get away with it?", blocking the page with robots.txt would work, but it disrupts the flow of link juice throughout your site. Adding the noindex tag to the page is preferable but also more obvious to your linking partners. Adding the nofollow tag to all the links will cost you a lot of link juice. Another method would be to present the links in a properly constructed iframe, which Google does not crawl. May I just add that I strongly dislike this type of question?
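To make the mechanics of those three on-page tactics concrete, here is a hedged sketch. The file names and partner URL are hypothetical, and the iframe behavior reflects the claim above (that Google does not crawl iframe content), which has varied over time:

```html
<!-- Option 1: noindex meta tag on the links page itself.
     Keeps the page out of the index, but is visible to anyone viewing source. -->
<meta name="robots" content="noindex">

<!-- Option 2: rel="nofollow" on each outbound link.
     The page stays indexed, but the attribute is easy for partners to spot. -->
<a href="http://www.partner-site.example/" rel="nofollow">Partner Site</a>

<!-- Option 3: serve the link list inside an iframe (hypothetical path),
     relying on the framed content not being crawled as part of this page. -->
<iframe src="/partner-links.html" width="600" height="400"></iframe>
```

Each option trades off detectability against link-juice cost, which is the comparison the answer above walks through.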