Moz Q&A is closed.
After more than 13 years and tens of thousands of questions, Moz Q&A closed on 12th December 2024. While we're not completely removing the content (many posts will remain viewable), we have locked both new posts and new replies.
Where is the best place to put reciprocal links on our website?
-
Where should reciprocal links be placed on our website? Should we create a "Resources" page? Should the page be "hidden" from the public? I know there is a right answer out there! Thank you for your help!
Jay
-
Oops. I only just noticed the date on this question! Sorry folks...
-
Agreed. However, it might be worth adding that reciprocal links can also look natural but only if they form a small part of your link profile.
Whatever you do, Jay, don't add a page called "Links" or "Resources." Make sure the links are contextual, placed within the body of your articles or content.
My workaround is to place them on the testimonials pages of my sites.

A good example would be when I managed to get a link from Sky.com - in return they requested a link back to their site, and I would have been silly not to provide one. I didn't want the link for the juice, I wanted it for the click-throughs. The reason I'm saying this is to show that not all reciprocal links are seen as unnatural.
-
Good stuff....thank you for the awesome advice! I'll heed it.
-
Generally speaking, I would recommend that you do NOT blindly copy these links from your competitor. The value of analyzing a competitor's backlink profile is that you can pursue the good links and leave the bad ones behind.
If you want to search your competitor's site for these links, you can try using the site: operator in Google, or perhaps the specific site's own search box.
There are forms of quality reciprocal links. They can be found on a "Resources" type of page on your site. If your site focuses on health topics, you may link to the Mayo Clinic, the National Institutes of Health, and other quality sites. The purpose of these links is to help your readers locate quality information. Some relevant sites may link to you as well, and these links could be reciprocal, which is fine.
If you have a "Resources" page where you provide links with perfect anchor text (i.e. "best real estate agent") to sites which are not relevant to yours, that is a rather obvious attempt to manipulate search engine rankings. The links you provide will offer little to no value, and the same goes for the links you receive. You need to give search engines a lot more credit. You are spending time and effort on what frankly amounts to bullsh*t SEO.
Investigate all your competitor's links. Look for their best links and study them. You want ideas, not to copycat. The best you can do by copying your competitors is to eventually catch up to them. Provide better content, target better keywords, be more current and relevant, be more authoritative, and then you will gain links that a competitor can't copy...because the links you gain have to be earned.
-
While I generally agree with Alan above, if you're going to do reciprocal links, I'd try to work them into the context of your site on different pages.
-
Hey Ryan. I found a bunch of links through the SEOmoz link analysis tool that our competitors have. When I visited the link sources, all of them required a reciprocal link to be placed on our website. However, I notice that none of our competitors have a "public link page" where these reciprocals might be. Therein was my question... Jay
-
Can you share more details about these reciprocal links? What is the purpose of placing them on your site?
-
I would not pursue reciprocal links. Search engines look for unnatural patterns of linking, and although reciprocal links do happen naturally sometimes, search engines can see not only your pattern but also the patterns of the sites you exchange links with.
But having said that, you are on the right track: link out from a page with low PR and include plenty of links back to your own site, so that you only give away a small percentage of link juice.
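To put that "small percentage of link juice" idea in concrete terms, here is a minimal sketch using the classic PageRank model, in which a page's score is divided evenly among its outbound links and scaled by a damping factor. The numbers and the 0.85 damping value are the textbook illustration, not how any modern search engine actually weights links.

```javascript
// Simplified PageRank share calculation (classic model: a page's score is
// split evenly across its outbound links, scaled by a damping factor).
// Illustrative only; real ranking systems are far more complex.
function juicePassedPerLink(pageRank, totalOutboundLinks, damping = 0.85) {
  if (totalOutboundLinks === 0) return 0;
  return (pageRank * damping) / totalOutboundLinks;
}

// A links page with 2 reciprocal (external) links and 18 internal links:
const perLink = juicePassedPerLink(1.0, 20); // score each individual link receives
const externalShare = perLink * 2;           // total passed to the 2 external links
console.log(perLink.toFixed(4));             // 0.0425
console.log(externalShare.toFixed(3));       // 0.085
```

Under this simplified model, padding the page with internal links dilutes the share each external reciprocal link receives, which is the tactic described above.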
Related Questions
-
NO Meta description pulling through in SERP with react website - Requesting Indexing & Submitting to Google with no luck
Hi there, A year ago I launched a website using React, which has caused Google to not read my meta descriptions. I've submitted the sitemap and there was no change in the SERP. Then, I tried "Fetch and Render" and requested indexing for the homepage, which did work; however, I have over 300 pages and I can't do that for every one. I have requested a fetch, render and index for "this url and linked pages," and while Google's cache has updated, the SERP listing has not. I looked in the Index Coverage report in the new GSC and it says the URLs are valid and indexable, and yet there's still no meta description. I realize that Google doesn't have to index all pages, and that Google may not always use your meta description, but I want to make sure I do my due diligence in making the website crawlable. My main questions are: (1) If Google didn't reindex ANYTHING when I submitted the sitemap, what might be wrong with my sitemap? (2) Is submitting each URL manually bad, and if so, why? (3) Am I simply jumping the gun, since it's only been a week since I requested indexing for the main URL and all the linked URLs? (4) Any other suggestions?
Web Design | | DigitalMarketingSEO1 -
What’s the best tool to visualize internal link structure and relationships between pages on a single site?
I'd like to review the internal linking structure on my site. Is there a tool that can visualize the relationships between all of the pages within my site?
Web Design | | QBSEO0 -
Spanish website indexed in English: redirect to the Spanish or English version in a new website design?
Hi MOZ users, I have this problem: we have a website in the Spanish language, but Google crawls it in English (the reasons aren't important). We remade the entire website and now we are planning the move. The new website will have different language versions: English, Spanish and Portuguese. Somebody tells me that we have to redirect the old URLs (crawled in English) to the new English versions, not to the Spanish ones (the real language of the originals). Example: URL1, language Spanish, crawled in English --> redirect to the English language version. The other option would be to redirect to the new Spanish version, which is what the visitor expects to find: URL1, language Spanish, crawled in English --> redirect to the Spanish language version. What do you think? Which is the better option?
Web Design | | NachoRetta0 -
Duplicate content on websites for multiple countries
I have a client who has a website for their U.S. based customers. They are currently adding a Canadian dealer and would like a second website with much of the same info as their current website, but with Canadian contact info etc. What is the best way to do this without creating duplicate content that will get us penalized? If we create a website at ABCcompany.com and ABCCompany.ca or something like that, will that get us around the duplicate content penalty?
Web Design | | InvoqMarketing0 -
Too Many Outbound Links on the Home Page - Bad for SEO?
Hello Again Moz community, This is my last Q of the day: I have a LOT of outbound links on the home page of www.web3.ca Some are to clients projects, most are to other pages on the website. Can reducing this to the core pages have a positive impact on SEO? Thanks, Anton
Web Design | | Web3Marketing870 -
Multi-page articles, pagination, best practice...
A couple months ago we migrated a 12-year-old site -- about 2,000 pages -- to WordPress.
Web Design | | jmueller0823
The transition was smooth (301 redirects); we haven't lost much search juice. We have about 75 multi-page articles (posts); we're using a plugin (Organize Series) to manage the pagination. On the old site, all of the pages in a series had the same title. I've since heard this is not a good SEO practice (duplicate titles). The URLs were the same too, with a number (designating the page number) appended to the title text. Here's my question: 1. Is there a best practice for titles and URLs of multi-page articles? Let's say we have an article named 'This is an Article' ... What if I name the pages like this:
-- This is an Article, Page 1
-- This is an Article, Page 2
-- This is an Article, Page 3 Is that a good idea? Or, should each page have a completely different title? Does it matter?
** I think for usability, the examples above are best; they give the reader context. What about URLs? Are these a good idea? /this-is-an-article-01, /this-is-an-article-02, and so on...
Does it matter? 2. I've read that maybe multi-page articles are not such a good idea -- from usability and SEO standpoints. We tend to limit our articles to about 800 words per page. So, is it better to publish 'long' articles instead of multi-page? Does it matter? I think I'm seeing a trend on content sites toward long, one-page articles. 3. Any other gotchas we should be aware of, related to SEO/ multi-page? Long post... we've gone back-and-forth on this a couple times and need to get this settled.
Thanks much! Jim0 -
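If it helps, the title and URL convention discussed in the question (unique per-page titles, numbered slugs, plus the rel="prev"/"next" head tags that search engines historically used to understand paginated series) can be sketched as a small helper. The function name and the zero-padded slug format are illustrative assumptions, not part of any plugin's API:

```javascript
// Builds a unique title, a numbered URL, and rel="prev"/"next" head tags
// for one page of a multi-page article. Slug format is an assumption.
function paginationMeta(baseTitle, baseSlug, page, totalPages) {
  const pad = n => String(n).padStart(2, '0');
  const slugFor = n => `/${baseSlug}-${pad(n)}`;
  const tags = [];
  if (page > 1) tags.push(`<link rel="prev" href="${slugFor(page - 1)}">`);
  if (page < totalPages) tags.push(`<link rel="next" href="${slugFor(page + 1)}">`);
  return {
    title: `${baseTitle}, Page ${page}`,
    url: slugFor(page),
    headTags: tags,
  };
}

const meta = paginationMeta('This is an Article', 'this-is-an-article', 2, 3);
console.log(meta.title);    // "This is an Article, Page 2"
console.log(meta.url);      // "/this-is-an-article-02"
console.log(meta.headTags); // prev -> ...-01, next -> ...-03
```

This gives each page a distinct title (avoiding the duplicate-title problem described above) while keeping enough context for readers to know where they are in the series.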
Best method to stop crawler access to extra Nav Menu
Our shop site has a 3-tier drop-down mega-menu, so it's easy to find your way to anything from anywhere. It contains about 150 links and probably 300 words of text. We also have a more context-driven single layer of sub-category navigation, as well as breadcrumbs on our category pages. You can get to every product and category page without using the drop-down mega-menu. Although the mega-menu is a helpful tool for customers, it means that every single page in our shop has an extra 150 links on it that go to stuff that isn't necessarily related or relevant to the page content. This means that when viewed from the perspective of a crawler, rather than a nice tree-like crawling structure, we've got more of an unstructured mesh where everything is linked to everything else. I'd like to hide the mega-menu links from being picked up by a crawler, but what's the best way to do this? I can add a nofollow to all mega-menu links, but are the links still registered as page content even if they're not followed? It's a lot of text if nothing else. Another possibility we're considering is to set the mega-menu to only populate with links when its main button is hovered over, so it's not part of the initial page load content at all. Or we could use a crude yet effective system we have used for some other menus, of base-encoding the content inline so it's not readable by a spider. What would you do and why? Thanks, James
Web Design | | DWJames0 -
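For the "only populate on hover" option mentioned in the question, one minimal sketch is to keep the mega-menu's links out of the initial HTML and inject them on the first mouseover, so a crawler reading the raw page source never sees them. The data shape, element ids, and function name here are assumptions for illustration:

```javascript
// Builds the mega-menu markup from a plain data structure. Because this HTML
// is generated client-side and only on demand, it is absent from the
// initially served page. Data shape is an assumption.
function buildMenuHtml(categories) {
  return categories
    .map(cat =>
      `<li>${cat.name}<ul>` +
      cat.links.map(l => `<a href="${l.href}">${l.text}</a>`).join('') +
      `</ul></li>`)
    .join('');
}

// In the browser you would wire it up roughly like this (fires once):
// document.getElementById('mega-menu-btn').addEventListener('mouseover', () => {
//   document.getElementById('mega-menu').innerHTML = buildMenuHtml(MENU_DATA);
// }, { once: true });
```

One caveat: modern crawlers can execute JavaScript, so it is deferring until a real mouse event (not merely rendering client-side) that keeps the links out of what gets crawled; whether hiding navigation from crawlers is advisable at all is a separate question.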
Link Juice Passing Through Headers
I understand the concept of linking your pages internally to help pass juice from one to another, but it seems to me that the navigation bar, with links to your main pages appearing on every page, kind of undermines that linking strategy. For example: at the top of every page is Home, About, Services, Contact, etc. Do the bots count these as links from each page? There must be something I'm missing here! Help me out guys!
Web Design | | bcarp880