Reciprocal Links and nofollow/noindex/robots.txt
-
Hypothetical Situations:
- You get a guest post on another blog, and it includes a great link back to your website. You want to tell your readers about it, but linking to the post would turn that link into a reciprocal link instead of a one-way link, which presumably has more value. Should you nofollow your link to the guest post?
My intuition here, and the answer that I expect, is that if it's good for users, the link belongs there, and as such there is no trouble with linking to the post. Is this the right way to think about it? Would grey hats agree?
- You're working for a small local business and you want to explore some reciprocal link opportunities with other companies in your niche using a "links" page you created on your domain. You decide to get sneaky and either noindex your links page, block the links page with robots.txt, or nofollow the links on the page. What is the best practice?
My intuition here, and the answer that I expect, is that this would be a sneaky practice, and could lead to bad blood with the people you're exchanging links with. Would these tactics even be effective in turning a reciprocal link into a one-way link if you could overlook the potential immorality of the practice? Would grey hats agree?
-
-
Yes, your link back to the other site is in good faith and good for readers. If you don't do it too much, you shouldn't get dinged for recip linking.
-
About 4 or 5 years ago I used to see sites do this, usually using the robots.txt file to exclude spidering of their links page. I don't know if it's the "best practice", but robots.txt seemed to be used more often than noindex on the page.
It's a sleazy thing to do and yes, it can cause bad blood with your link partners. I know because on more than one occasion I informed sites about that practice being used on them, and they removed their outbound links and thanked me for pointing out how they were being played for chumps.
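If you want to check whether a link partner is doing this to you, here's a rough sketch of how you could audit their links page with nothing but Python's standard library. All the URLs and the function name are made up for illustration; this only detects the three tactics discussed in this thread, not every way a link can be devalued.

```python
# Hypothetical audit helper: given the contents of a partner's robots.txt and
# the HTML of their "links" page, report which link-devaluing tactics from this
# thread are in play. Standard library only.
from html.parser import HTMLParser
from urllib import robotparser


class _LinksPageParser(HTMLParser):
    """Collects the meta-robots directive and the rel value on the link to your site."""

    def __init__(self, your_url):
        super().__init__()
        self.your_url = your_url
        self.noindexed = False
        self.nofollowed = False

    def handle_starttag(self, tag, attrs):
        # Attribute values can be None for valueless attributes; normalize to "".
        attrs = {name: (value or "") for name, value in attrs}
        if tag == "meta" and attrs.get("name", "").lower() == "robots":
            if "noindex" in attrs.get("content", "").lower():
                self.noindexed = True
        elif tag == "a" and attrs.get("href") == self.your_url:
            if "nofollow" in attrs.get("rel", "").lower():
                self.nofollowed = True


def audit_links_page(robots_txt, links_page_url, page_html, your_url):
    """Return a list of the sneaky tactics detected on a partner's links page."""
    tactics = []
    # Tactic 1: the links page is excluded from crawling via robots.txt.
    rp = robotparser.RobotFileParser()
    rp.parse(robots_txt.splitlines())
    if not rp.can_fetch("*", links_page_url):
        tactics.append("links page blocked by robots.txt")
    # Tactics 2 and 3: meta noindex on the page, or nofollow on your link.
    parser = _LinksPageParser(your_url)
    parser.feed(page_html)
    if parser.noindexed:
        tactics.append("links page has meta noindex")
    if parser.nofollowed:
        tactics.append("your link carries rel=nofollow")
    return tactics
```

In practice you would fetch the partner's robots.txt and links page yourself and pass the text in; an empty result list means none of the three tactics were detected.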
-
-
Thanks, Ryan. I appreciate the answers, especially for the second question. Link exchanges aren't really my style as far as link building is concerned, but it kind of popped into my head as a result of the first question, so I figured I'd throw it out there. Thanks for the responses!
-
Hi Anthony.
Your first question asks how to inform your site's readers about a blog article you created on another site, without negatively impacting the link juice you are receiving from the article (i.e. creating a reciprocal link).
One possibility is mentioning the article without linking to it: "Check out my article on Grey Hat SEO at the SEOmoz site". Along the same lines, you can mention the article's URL as plain text without linking it: http://www.seomoz.org/grey-hat-seo (fictitious link). Since there is no actual link, you don't need to add nofollow and no link juice is lost.
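In markup terms, the difference between the two approaches looks like this (using the fictitious URL from above):

```html
<!-- A real link: clickable for readers, but it creates the reciprocal link -->
<a href="http://www.seomoz.org/grey-hat-seo">my Grey Hat SEO article</a>

<!-- A plain-text mention: nothing for crawlers to follow, no nofollow needed -->
Check out my article: http://www.seomoz.org/grey-hat-seo
```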
You can also tweet the link or post it on Facebook or another social sharing site. If you display your tweets on your site, though, this tactic is less productive, since it recreates the very reciprocal link you were trying to avoid.
You can also get creative: "Check out my new article on Grey Hat SEO tactics. It ranks #1 in Google! Click here to see," and then provide a link to the Google search results. Your reader would presumably click that result, so you not only send the user to your article but also send some positive signals to Google at the same time.
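That last trick is just an ordinary link pointed at a Google results page rather than at the article itself; a hypothetical version of the markup (query and URL made up):

```html
<!-- Hypothetical: link to the SERP for a query the article ranks #1 for -->
<a href="https://www.google.com/search?q=grey+hat+seo">
  Check out my new article on Grey Hat SEO tactics. It ranks #1 in Google!
</a>
```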
As for your second question, "How can I backstab my linking partners and get away with it?", blocking the page with robots.txt would work, but it disrupts the flow of link juice throughout your site. Adding the noindex tag to the page is preferable, but also more obvious to your linking partners. Adding rel=nofollow to all the links will cost you a lot of link juice. Another method would be to present the links in a properly constructed iframe, which Google does not crawl. May I just add that I strongly dislike this type of question?
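For reference, the three mechanisms being weighed look roughly like this (the file path and partner URL are hypothetical):

```text
# 1. robots.txt — blocks crawling of the links page entirely:
User-agent: *
Disallow: /links.html

# 2. Meta noindex in the links page's <head> — crawled, but kept out of the index:
<meta name="robots" content="noindex, follow">

# 3. rel=nofollow on each outbound link — a hint not to pass link equity:
<a href="http://www.partner-site.com/" rel="nofollow">Partner Site</a>
```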