Linking to unrelated content
-
Hi,
Just wanted to know: will linking to unrelated content harm the site? I know linking to unrelated content is not good, but I wanted to know whether there is any real chance of harm. I have a site related to health and another related to technology. The technology site is very strong, with a PR 6 and very good backlinks, while the health site faces very tough competition. So I was wondering whether I could link the health site from the technology site to get a good link from it. Can you advise me on this?
waiting for your replies...
-
I think that if you have a lot of these links, you are painting a target on your back.
-
I don't think you will have any problems. Try to find a related line in the content somewhere to place the link; technology and health would have some sort of crossover. If you looked both sites over, I am sure you could find two pages with some relevance. Either way, I don't think you will do yourself any harm.
Related Questions
-
On our site, some wrong links were entered by mistake and Google crawled them. We have fixed those links, but they still show up in Not Found errors. Should we just mark them as fixed, or what is the best way to deal with them?
A parameter was not sent, so the links were read as null/city and null/country instead of cityname/city.
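A rough sketch of one way to handle these, assuming an Apache server and that every broken URL begins with /null/ (both assumptions, not details from the question): redirect the bad paths, or return 410, so Google drops them faster than marking the errors as fixed would on its own.

```apache
# Hypothetical .htaccess rules, assuming Apache and that all broken URLs start with /null/.
# Option 1: permanently redirect the bad paths to a sensible page (here, the homepage).
RedirectMatch 301 ^/null/ /
# Option 2: if there is no sensible target, tell crawlers the URLs are gone for good.
# RedirectMatch gone ^/null/
```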
Technical SEO | Lybrate06060
-
Finding a specific link - Duplicating my own content
Hi Mozzers, This may be a bit of a n00b question and I feel I should know the answer, but alas, here I am asking. I have a page www.website.co.uk/page/ and I'm getting a duplicate page report for www.website.co.uk/Page/. I know this is because somewhere on my website a link exists using the capitalised version. I have tried everything I can think of to find it, but with no luck. Any little tricks? I could always rewrite the URLs to lowercase, but I also have downloadable software etc. on the website that I don't want to take the capitals out of, so the best solution seems to be finding the link and removing it. Most link checkers I use treat the capitalised and non-capitalised versions as the same thing, so they really aren't helping, lol.
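One way to hunt down the capitalised link is a small crawl of your own site that flags any page containing it. Below is a rough sketch, assuming Python with requests and BeautifulSoup installed; the start URL and the /Page/ fragment are placeholders taken from the question.

```python
# Sketch: crawl internal pages and report any page that links to the capitalised URL.
from collections import deque
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup

START = "https://www.website.co.uk/"   # placeholder start URL
TARGET_FRAGMENT = "/Page/"             # the capitalised path we want to trace

seen = set()
queue = deque([START])

while queue:
    url = queue.popleft()
    if url in seen:
        continue
    seen.add(url)
    try:
        resp = requests.get(url, timeout=10)
    except requests.RequestException:
        continue
    if "text/html" not in resp.headers.get("Content-Type", ""):
        continue
    soup = BeautifulSoup(resp.text, "html.parser")
    for a in soup.find_all("a", href=True):
        href = urljoin(url, a["href"])
        if TARGET_FRAGMENT in href:
            print(f"{url} links to {href}")   # this page holds the culprit link
        if urlparse(href).netloc == urlparse(START).netloc:
            queue.append(href.split("#")[0])
```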
Technical SEO | ATP0
-
No crawl code for pages of helpful links vs. no follow code on each link?
Our college website has many "owners" who want pages of "helpful links" resulting in a large number of outbound links. If we add code to the pages to prevent them from being crawled, will that be just as effective as making every individual link no follow?
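For reference, the two approaches look roughly like this in generic markup (not code from the college site). Note they are not exactly equivalent: blocking crawling is normally done in robots.txt, a robots meta tag controls indexing and link-following for the whole page, and rel="nofollow" only withholds endorsement from an individual link.

```html
<!-- Page-level approach: a robots meta tag in the <head> of each "helpful links" page -->
<meta name="robots" content="noindex, nofollow">

<!-- Per-link approach: mark each outbound link individually -->
<a href="https://example.com/resource" rel="nofollow">Helpful resource</a>
```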
Technical SEO | LAJN0
-
Canonical Link Question
I wrote an article as a static page, but it would also make a very good blog post, so my question is twofold: 1. If I post it as a static page, syndicate it as a blog post, and give the blog post a canonical link pointing to the page, Google will see the blog post but treat the page URL as the one that gets the credit, correct? In turn, it won't ding me for duplicate content. 2. Assuming the above is correct, should I do it the other way around (publish it as the blog post, with the static page referencing the blog), or keep it the way I have it: as a static page, with the blog post using a canonical reference back to the page? Any input would be greatly appreciated.
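A minimal sketch of the setup described in point 2, with placeholder URLs: the blog post's head declares the static page as canonical, so the page URL is the one that accumulates the credit.

```html
<!-- In the <head> of the syndicated blog post (placeholder URL) -->
<link rel="canonical" href="https://www.example.com/static-page/">
```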
Technical SEO | tgr0ss0
-
Internal linking with Old Content
Hello, I have a sports website on which users write their opinions about the sporting events that take place every day throughout the year. Each of these sporting events generates a new page or URL identifying the match and its date, for example: www.domain.com/baseball/boston-v-yankees-04-24-2012-1234.html. The teams face each other several times a year, and each match creates a different URL or page. I would like to link old pages to new pages and vice versa. How would you recommend linking these pages: linking them all to each other, linking old pages to the new pages as they are generated, or some other way? I would appreciate your guidance and help in this case. Thank you.
Technical SEO | NorbertoMM1
-
Content Delivery Network
Anyone have a good reference for implementing a content delivery network? Any SEO pitfalls with using a CDN (brief research seems to indicate no problems)? I seem to recall that SEOmoz was using Amazon Web Services (AWS) for CDN. Is that still the case? All CDN & AWS experiences, advice, references welcomed!
Technical SEO | Gyi0
-
Is this a good link?
Found a .gov link to my website, www.kars4kids.org. The URL it links to is http://www.nyc.gov/cgi-bin/exit.pl?url=http://www.kars4kids.org/, which does eventually redirect to kars4kids. Will search engines see this as a link?
Technical SEO | Morris770
-
The Bible and Duplicate Content
We have our complete set of scriptures online, including the Bible at http://lds.org/scriptures, and users can browse to any of the volumes of scriptures. We've improved the user experience by allowing users to link to specific verses in context, which will scroll to and highlight the linked verse. However, this creates a significant amount of duplicate content. For example, these links:
http://lds.org/scriptures/nt/james/1.5
http://lds.org/scriptures/nt/james/1.5-10
http://lds.org/scriptures/nt/james/1
All of those link to the same chapter in the book of James, yet the first two will highlight verse 5 and verses 5-10 respectively. This is a good user experience, because in other sections of our site and on blogs throughout the world webmasters link to specific verses so the reader can see the verse in context of the rest of the chapter. Another Bible site has separate HTML pages for each verse individually and tends to outrank us because of this (and possibly some other reasons) for long-tail chapter/verse queries. However, our tests indicated that the current version is preferred by users.
We have a sitemap ready to publish which includes a URL for every chapter/verse, and we hope this will improve indexing of some of the more popular verses. However, Googlebot is going to see some duplicate content as it crawls that sitemap! So the question is: is the sitemap a good idea, given that we can't revert to putting each chapter/verse on its own unique page? We are also going to recommend creating unique titles for each of the verses and passing a portion of the verse text into the meta description. Will this be enough to satisfy Googlebot that the pages are in fact unique? They certainly are from a user perspective. Thanks all for taking the time!
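For illustration only (not the site's actual sitemap), entries in the planned sitemap would look something like this, using two of the URLs above:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- one <url> entry per chapter or chapter/verse URL -->
  <url>
    <loc>http://lds.org/scriptures/nt/james/1</loc>
  </url>
  <url>
    <loc>http://lds.org/scriptures/nt/james/1.5</loc>
  </url>
</urlset>
```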
Technical SEO | LDS-SEO0