Page for Link Building
-
Hello guys,
My question is about link building and reciprocal links. Since many directories request a reciprocal link, I wonder whether it would be better to create a single page on the website just for this kind of link.
What do you guys recommend?
Thanks in advance,
PP
-
Hi Pedro,
Please go through Google's link scheme guidelines:
http://support.google.com/webmasters/bin/answer.py?hl=en&answer=66356
Those guidelines clearly state that links from low-quality directory sites are unnatural, so it's better to stay away from directories that demand reciprocal links.
Hope this helps.
-
Totally agree - stay away from them now!
best,
-
To be honest, reciprocal linking is dead now so I wouldn't recommend obtaining links from any directories that require you to link back to them. Not only does Google consider reciprocal linking to be worthless anyway but you could potentially land yourself in hot water when the directory invariably gets devalued by Google and you’re actively linking to them.
Links pages were all the rage back in the late 1990s, but I don't think they have a place anymore.
Hope this helps.
-
Related Questions
-
Are image pages considered 'thin' content pages?
I am currently doing a site audit. The total number of pages on the website is around 400, and 187 of them are image pages that show a word count of zero in the Screaming Frog report. Will search engines consider them 'thin' content? Should I include them as an issue? An answer would be most appreciated.
Technical SEO | MTalhaImtiaz
-
Should I disavow links from pages that don't exist any more
Hi. I'm doing a backlink audit of two sites, one with 48k backlinks and the other with 2M. Both are very old sites, and both have tons of backlinks from old pages and websites that don't exist any more, although the backlinks still show up in the Majestic Historic index. I cleaned up the obviously useless links and passed the rest through Screaming Frog to check whether those old pages/sites even exist. Tons of the linking pages return a 0, 301, 302, 307, 404, etc. Should I consider all of these pages bad backlinks and add them to the disavow file? Just a clarification: I'm not talking about 301-ing a backlink to a new target page. I'm talking about the origin page generating an error at ping, e.g. originpage.com/page-gone sends me a link to mysite.com/product1; Screaming Frog pings originpage.com/page-gone and returns a status error. Do I add originpage.com/page-gone to the disavow file or not? Hope I'm making sense 🙂
Technical SEO | IgorMateski
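For anyone finding this later: if you do decide to disavow, Google's disavow file is just a plain-text list, one URL or domain per line, uploaded through the disavow tool in Search Console. A minimal sketch, reusing the example URL from the question above plus hypothetical domains:

    # Lines starting with # are comments.
    # Disavow a single dead linking page:
    http://originpage.com/page-gone
    # Disavow entire domains (hypothetical examples):
    domain:spammy-directory.example
    domain:dead-old-site.example
-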
If I want to clean up my URLs and take "www.site.com/page.html" and make it "www.site.com/page", do I need a redirect?
If I want to clean up my URLs and take "www.site.com/page.html" and make it "www.site.com/page", do I need a redirect? If this scenario requires a 301 redirect no matter what, I might as well make the URL a little more keyword-rich for the page while I'm at it. However, since these pages are ranking well, I'd rather not lose any authority in the process and would just keep the URL stripped of the ".html" (if that's possible). Thanks for your help! [edited for formatting]
Technical SEO | Booj
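If you go the redirect route for the question above, here is a hedged .htaccess sketch, assuming Apache with mod_rewrite (test on a staging copy first; the exact rules vary by server setup):

    RewriteEngine On
    # 301-redirect any request for /page.html to /page:
    RewriteCond %{THE_REQUEST} \s/([^.\s?]+)\.html[\s?] [NC]
    RewriteRule ^ /%1 [R=301,L]
    # Internally serve page.html when the extensionless /page is requested:
    RewriteCond %{REQUEST_FILENAME} !-d
    RewriteCond %{REQUEST_FILENAME}.html -f
    RewriteRule ^(.+)$ $1.html [L]

A 301 passes most authority, so stripping ".html" this way shouldn't cost much; changing the slug itself is a bigger gamble on pages that already rank well.
-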
Too many links?
Hello! I've just started with SEOmoz and am getting a 'too many links' warning on a few of my blog posts. It happens on pages with high numbers of comments, and the links come from each commenter's profile (hopefully that makes sense; they're not just randomly stuffed links). Is there a way to keep this from causing a problem? Thanks!
Technical SEO | PaulineMagnusson
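One common mitigation, sketched below as a hypothetical comment-template change: add rel="nofollow" to commenter profile links so they don't pass equity. The on-page link-count warning itself is about crawl and link-value dilution, so trimming or paginating comments helps more if the count is extreme. The markup and profile URL here are made up:

    <!-- hypothetical comment markup -->
    <li class="comment">
      <a href="http://example.com/~commenter" rel="nofollow">Commenter Name</a>
      <p>Great post!</p>
    </li>
-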
How can I prevent duplicate content between www.page.com/ and www.page.com
SEOmoz's recent crawl showed an error for duplicate content and duplicate page titles. This is a problem because the crawl found the same page twice due to a trailing '/' on one URL, e.g. www.page.com/ vs. www.page.com. My question is: do I need to be concerned about this, and is there anything I should put in my .htaccess file to prevent it? Thanks!
Karl
Technical SEO | onlineexpression
A Puzzling Link
I'm stumped and I'm hoping some mozzers will be able to help. I run our company blog (http://scottymacblog.com/). The last couple of days I have noticed that the blog is receiving some traffic from cnn.com. I looked, but cannot find any mention of the blog on cnn. Adding to my frustration is that the content on cnn is constantly changing. Our blog doesn't do any sort of advertising and no one affiliated with the blog posts on cnn. As great as it is to be getting traffic from such a valued source, I have no idea why. Has something like this happened to (for?) anyone else? Any ideas on how I can research the source of the link? Thanks in advance!
Technical SEO | EssEEmily
-
Deep Page Link - url no longer exists
I used Open Site Explorer and found a link to our site on http://www.business.com/guides/bedding-supplies-3639/ The link was set up to point to an important, deep page on my website, but the structure of our URLs changed and that URL no longer exists. The link (anchor text 'National Hospitality Supply') now directs to our homepage, www.nathosp.com. My question is: am I receiving full link juice, or would I be better served by creating a 301 redirect to the revised/new page URL? In case it matters, if I had my choice I'd prefer the link to go to the intended deep page. Thanks in advance for your insight. -Josh Fulfer
Technical SEO | mhans
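On the mechanics: a 301 from the dead deep URL to the new deep page passes most of the link's value and sends visitors where the linker intended, which beats letting it fall through to the homepage. A one-line .htaccess sketch, assuming Apache, with both paths made up for illustration:

    # Hypothetical old and new paths:
    Redirect 301 /old-deep-page http://www.nathosp.com/new-deep-page
-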
Mask links with JS that point to noindex'ed pages
Hi, in an effort to prepare our site for Panda we dramatically reduced the number of pages that can be indexed (from 100k down to 4k). All the remaining pages are being equipped with unique and valuable content. We still have the other pages around, since they represent searches with filter combinations that we deem less interesting to the majority of users (hence they are not indexed). So I am wondering whether we should mask links to these non-indexed pages with JS, so that link juice doesn't get lost to them. Currently the targeted pages are de-indexed via "noindex, follow"; we might block them with robots.txt instead if the "site:" query doesn't show improvements. Thanks, Sebastian
Technical SEO | derderko
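For what it's worth, the masking idea in the question above usually looks something like the sketch below: render the filter combinations without an <a href> and navigate on click, so crawlers see no followable link. The class name, data attribute, and markup are all hypothetical, and note that Google can execute JavaScript, so treat this as leak reduction rather than a guarantee:

    // Hypothetical markup this script expects:
    //   <span class="filter-link" data-filter-url="/search?color=red&size=9">red, size 9</span>
    // Attach click navigation to every masked filter link:
    document.querySelectorAll('span.filter-link').forEach(function (el) {
      el.addEventListener('click', function () {
        window.location.href = el.getAttribute('data-filter-url');
      });
    });

Since the pages are already noindex'ed, whether the masking is worth the added complexity is debatable; blocking the filter paths in robots.txt conserves crawl budget more directly.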