Reciprocal Links and nofollow/noindex/robots.txt
-
Hypothetical Situations:
- You get a guest post on another blog, and it offers a great link back to your website. You want to tell your readers about it, but linking to the post will turn that link into a reciprocal link instead of a one-way link, which presumably has more value. Should you nofollow your link to the guest post?
My intuition here, and the answer that I expect, is that if it's good for users, the link belongs there, and as such there is no trouble with linking to the post. Is this the right way to think about it? Would grey hats agree?
- You're working for a small local business and you want to explore some reciprocal link opportunities with other companies in your niche using a "links" page you created on your domain. You decide to get sneaky and either noindex your links page, block the links page with robots.txt, or nofollow the links on the page. What is the best practice?
My intuition here, and the answer that I expect, is that this would be a sneaky practice, and could lead to bad blood with the people you're exchanging links with. Would these tactics even be effective in turning a reciprocal link into a one-way link if you could overlook the potential immorality of the practice? Would grey hats agree?
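For reference, here is roughly what each of the three tactics from the question looks like in practice. This is just a sketch; the /links.html path and the partner URL are hypothetical:

```html
<!-- Tactic 1: a noindex meta tag in the <head> of the links page -->
<meta name="robots" content="noindex">

<!-- Tactic 2: a robots.txt rule blocking the links page from crawling:
     User-agent: *
     Disallow: /links.html
-->

<!-- Tactic 3: a nofollow attribute on each outbound link on the page -->
<a href="http://www.example-partner.com/" rel="nofollow">Partner Site</a>
```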
-
-
Yes, your link back to the other site is in good faith and good for readers. As long as you don't do it too often, you shouldn't get dinged for reciprocal linking.
-
About 4 or 5 years ago I used to see sites do this, usually using the robots.txt file to exclude spidering of their links page. I don't know if it's the "best practice," but robots.txt seemed to be used more often than noindex on the page.
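If you suspect a link partner is pulling this trick with robots.txt, you can check programmatically. Here is a minimal sketch using Python's standard-library robots.txt parser; the rules and URLs are hypothetical:

```python
from urllib import robotparser

# Hypothetical robots.txt that excludes a /links.html page from spidering
robots_txt = """\
User-agent: *
Disallow: /links.html
"""

rp = robotparser.RobotFileParser()
rp.parse(robots_txt.splitlines())

# The links page is blocked from crawling; the rest of the site is not
print(rp.can_fetch("*", "http://example.com/links.html"))  # False
print(rp.can_fetch("*", "http://example.com/about.html"))  # True
```

In practice you would point `RobotFileParser.set_url()` at the partner's live robots.txt and call `read()` instead of parsing a string.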
It's a sleazy thing to do and yes, it can cause bad blood with your link partners. I know because on more than one occasion I informed sites about that practice being used on them, and they removed their outbound links and thanked me for pointing out how they were being played for chumps.
-
-
Thanks, Ryan. I appreciate the answers, especially for the second question. Link exchanges aren't really my style as far as link building is concerned, but it kind of popped into my head as a result of the first question, so I figured I'd throw it out there. Thanks for the responses!
-
Hi Anthony.
Your first question asks how to inform your site's readers about a blog article you created on another site, without negatively impacting the link juice you are receiving from the article (i.e. creating a reciprocal link).
One possibility is mentioning the article without linking to it: "Check out my article on Grey Hat SEO at the SEOmoz site." A variation on the same idea is to show the URL as plain text rather than a live link: http://www.seomoz.org/grey-hat-seo (fictitious link). Since there is no actual link, you do not need to add nofollow and no link juice is lost.
You can also tweet the link or post it on Facebook or another social sharing site. If you display your tweets on your site, though, this tactic is less productive, since it recreates the very reciprocal link you were trying to avoid.
You can also get creative: "Check out my new article on Grey Hat SEO tactics. It ranks #1 in Google! Click here to see," and then you provide a link to Google which shows the search results. Your reader would presumably click that result, and you not only send the user to your article but also send some positive signals to Google at the same time.
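Building that link-to-the-search-results URL is just a matter of URL-encoding the query. A small sketch, with a hypothetical query:

```python
from urllib.parse import quote_plus

# Hypothetical query: send readers to the Google results page
# instead of linking the article directly
query = "grey hat seo tactics"
search_url = "https://www.google.com/search?q=" + quote_plus(query)
print(search_url)  # https://www.google.com/search?q=grey+hat+seo+tactics
```

Note this tactic only works if your article actually ranks where you claim it does.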
As for your second question, "How can I backstab my linking partners and get away with it?": blocking the page with robots.txt would work, but it disrupts the flow of link juice throughout your site. Adding the noindex tag to the page is preferable, but also more obvious to your linking partners. Adding the nofollow attribute to all the links will cost you a lot of link juice. Another method would be to present the links in a properly constructed iframe, which Google does not crawl. May I just add that I strongly dislike this type of question?
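The iframe method mentioned above would look roughly like this as a sketch (URLs hypothetical). Users see the partner links rendered on the page, but the anchor tags live in a separate document rather than in the page's own HTML:

```html
<!-- The visible "links" page embeds the actual anchor tags from a
     separate document via an iframe -->
<iframe src="http://www.example.com/link-partners.html"
        width="600" height="400" title="Link partners"></iframe>
```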