Are back-links wasted when the anchor text or surrounding content doesn't match the page content?
-
Hi Community,
I have seen a number of back-links where the content around the link doesn't match the content of the linked page. For example, page A links to page B, but the content isn't really relevant apart from the brand name: a page about "vertigo tiles" links to a page about "vertigo paints", where "vertigo" is the brand name. Will these kinds of back-links be completely wasted?
I have also found some broken links that I'm planning to redirect to existing pages just to reclaim the back-links, even though there isn't much content relevancy beyond the brand name. Are these back-links beneficial or not?
Thanks
-
Hi VT,
Keep in mind that when page A links to page B (either internally or externally), it is the first link to page B that appears in the HTML of page A that passes any link juice to page B. The rest don't count for anything as far as SEO is concerned. Often that first link is in the menu or breadcrumbs rather than in the page copy. That said, while it's nice to keep the anchor text to internal resources tight, the ultimate impact of that anchor text on internal resources isn't that great.
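To make that concrete, here is a minimal Python sketch (standard library only; the markup and URLs are made up) that walks a page's HTML in document order and reports the first anchor pointing at a given target URL, i.e. the link whose anchor text would matter under the "first link counts" behaviour described above:

```python
from html.parser import HTMLParser

class FirstLinkFinder(HTMLParser):
    """Capture the anchor text of the first <a href="..."> pointing at a target URL."""

    def __init__(self, target):
        super().__init__()
        self.target = target
        self.in_target_link = False
        self.done = False
        self.first_anchor_text = ""

    def handle_starttag(self, tag, attrs):
        # Only the first matching anchor in source order is recorded.
        if tag == "a" and not self.done and dict(attrs).get("href") == self.target:
            self.in_target_link = True

    def handle_data(self, data):
        if self.in_target_link:
            self.first_anchor_text += data

    def handle_endtag(self, tag):
        if tag == "a" and self.in_target_link:
            self.in_target_link = False
            self.done = True

# Hypothetical markup for "page A": a nav link appears before the in-copy link.
page_a_html = """
<nav><a href="/vertigo-paints">Vertigo Paints</a></nav>
<p>Read about <a href="/vertigo-paints">our premium paint range</a> here.</p>
"""

finder = FirstLinkFinder("/vertigo-paints")
finder.feed(page_a_html)
print(finder.first_anchor_text)  # "Vertigo Paints" -- the nav anchor, not the in-copy one
```

If the brand-name navigation link comes first in the source, that is the anchor text associated with the target on this page, regardless of any more descriptive link further down the copy.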
-
Hi vtmoz
In my opinion, anchor text is not so important these days; brand, domain, image, and "visit our website" anchors are all good anchors. What is really important is to have some links with the exact or similar anchor text for the keywords you want to rank for, and natural anchors for the rest.
As for the page content or surrounding context, the more related it is to the linked page, the better. If the content relevance is very low, it could still be a good link depending on the reputation of the linking website: the more reputation the linking site has, the less content relevance you need for it to be a "good" link.
Obviously, the best links are those where the content relevance is high and the linking site also has strong reputation.
As for the broken links you are planning to redirect, I would have to see the links, but unless they are backlinks from spammy sites, they should not harm your SEO. They may have only a small positive impact, but you have nothing to lose by doing this.
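If it helps, here is a minimal sketch (the paths are hypothetical) of how you might batch those broken back-linked URLs into 301 redirects. The exact mechanism depends on your server (Apache, nginx, or a CMS redirect plugin), but the idea is simply old URL to closest relevant live page:

```python
# Hypothetical mapping of broken URLs (taken from your backlink report)
# to the closest relevant pages that still exist on the site.
broken_to_live = {
    "/old/vertigo-tiles-catalogue": "/vertigo-tiles",
    "/blog/2014/vertigo-paint-colours": "/vertigo-paints",
}

# Emit Apache-style redirect rules; adapt the output format to your own server.
for old_path, new_path in broken_to_live.items():
    print(f"Redirect 301 {old_path} {new_path}")
```

Pointing each broken URL at the page that most closely matches its original topic (rather than everything at the homepage) gives the reclaimed links the best chance of counting for something.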
Hope that helps, best wishes!
Related Questions
-
Rel canonical to another page instead of the duplicate page: how does Google respond?
Hi all, We have 3 pages on the same topic. We decided to use rel canonical and remove the old pages from search to avoid duplicate content. Of these 3 pages, pages 1 and 2 have very similar content, while page 3 does not. Normally we would use rel canonical between 1 and 2, but I am wondering what happens if I canonicalise between 1 and 3 while 2 has the more similar content. Will Google respect it, or penalise us because we skipped the most similar page and pointed the canonical at another page? Thanks
Algorithm Updates | vtmoz0
-
SEO Myth-Busters -- Isn't there a "duplicate content" penalty by another name here?
Where is that guy with the mustache in the funny hat and the geek when you truly need them? So SEL (SearchEngineLand) said recently that there's no such thing as "duplicate content" penalties: http://searchengineland.com/myth-duplicate-content-penalty-259657. By the way, I'd love to get Rand or Eric or other Mozzers aka TAGFEE'ers to weigh in here on this if possible. The reason for this question is to double-check a possible "duplicate content" type penalty (possibly by another name?) that might accrue in the following situation.
1 - Assume a domain has a 30 Domain Authority (per OSE).
2 - The site on the current domain has about 100 pages, all hand coded. Things do very well in SEO because we designed it to do so. The site is about 6 years in the current incarnation, with a very simple e-commerce cart (again basically hand coded). I will not name the site for obvious reasons.
3 - Business is good. We're upgrading to a new CMS (hooray!). In doing so we are implementing categories and faceted search, with plans to try to keep the site to under 100 new "pages" using a combination of rel canonical and noindex. I will also not name the CMS for obvious reasons. In simple terms, as the site is built out and launched in the next 60-90 days, assume we have 500 products and 100 categories; that yields at least 50,000 pages, and with other aspects of the faceted search it could easily create 10X that many pages.
4 - In ScreamingFrog tests of the DEV site, it is quite evident that there are many tens of thousands of unique URLs that are basically the textbook illustration of a duplicate content nightmare. ScreamingFrog has also been known to crash while spidering, and we've discovered thousands of URLs of live sites using the same CMS. There is no question that spiders are somehow triggering some sort of infinite page generation, and we can see that both on our DEV site as well as out in the wild (in Google's Supplemental Index).
5 - Since there is no "duplicate content penalty" and there never was, are there other risks here that are caused by infinite page generation? Like burning up a theoretical "crawl budget", having the bots miss pages, or other negative consequences?
6 - Is it also possible that bumping a site that ranks well for 100 pages up to 10,000 pages or more might very well incur a link juice penalty as a result of all this (honest but inadvertent) duplicate content? In other words, is inbound link juice and ranking power essentially divided by the number of pages on a site? Sure, it may be somewhat mediated by internal link juice, but what are the actual big-dog issues here?
So has SEL's "duplicate content myth" truly been myth-busted in this particular situation? Thanks a million!
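As a back-of-the-envelope illustration of point 3, here is a small Python sketch (the facet names and counts are made up) showing how quickly crawlable URL combinations multiply once faceted parameters can be toggled on category pages:

```python
from math import prod

products, categories = 500, 100
base_pages = products * categories          # listing-style pages, per the figures in the question
print(base_pages)                           # 50,000

# Hypothetical facets a crawler could toggle on each category page.
facet_values = {"colour": 12, "size": 8, "price_band": 6, "sort": 4}

# Each facet is either absent or set to one of its values, so every category
# page can spawn (n + 1) variants per facet, all multiplied together.
variants_per_category = prod(n + 1 for n in facet_values.values())
print(variants_per_category)                # 13 * 9 * 7 * 5 = 4,095 variants per category
print(categories * variants_per_category)   # ~409,500 crawlable URLs before rel=canonical/noindex
```

The exact numbers are invented; the point is that facet combinations multiply rather than add, which is exactly where the crawl-budget worry in point 5 comes from.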
Algorithm Updates | seo_plus0
-
Absolute Internal Links and their Effect on Rankings for Local Businesses
For many localized search terms (City + Profession), I'm noticing that OpenSiteExplorer is reporting high numbers of backlinks for anchor text from absolute internal links on that same domain. Now, I'm familiar with the difference between relative and absolute internal links, but am wondering if this type of linking may carry a lot of weight for rankings. It may be a correlation-over-causation situation: companies that use absolute internal links for that terminology may simply be doing a better job (generally) with internal linking... but I feel like this may be something worth digging into. Everything I have read says that search engines view absolute and relative internal links essentially equally, but does anybody have their own insight on the effectiveness of these two types of internal links for small local businesses who otherwise are getting basically no links at all?
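For what it's worth, here is a tiny Python sketch (the domain and paths are made up) showing why crawlers generally end up treating the two forms as equivalent: a relative href is resolved against the URL of the page it sits on and yields the same absolute URL an absolute link would declare directly:

```python
from urllib.parse import urljoin

page_url = "https://www.example-plumber.com/services/"            # hypothetical page URL
relative_href = "../denver-plumber/"                              # relative internal link
absolute_href = "https://www.example-plumber.com/denver-plumber/" # absolute internal link

# A crawler resolves the relative link against the page it was found on.
resolved = urljoin(page_url, relative_href)
print(resolved)                    # https://www.example-plumber.com/denver-plumber/
print(resolved == absolute_href)   # True: both forms point at the same URL
```

So if there is a ranking difference in practice, it is more likely down to how those sites structure their internal linking than to the absolute-versus-relative syntax itself.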
Algorithm Updates | Kirch0
-
Why do I have 7 URLs from the same domain ranking on the 1st page?
I have a client that has individual pages for authorized dealers of their product (say "Car Dealers"). When you search for "brand name + location", Google returns 7 "dealership" pages from the parent company's domain as the first 7 results, but there is one that gets pushed off to the 5th page of the SERPs. The formatting of content, geo-targeting, and meta data on the page is identical on every single one. None of them have external links and there is not one extremely distinguishable thing to assess why the one page doesn't get placed on that first SERP. Why is the one getting pushed so far down? I know this may be a bit confusing, but any thoughts would be greatly appreciated. Thanks!
Algorithm Updates | MichaelWeisbaum0
-
Guides to determine if a client's website has been penalized?
Has anyone come across any great guides to pair with client data to help you determine if their website has been penalized? I'm also not talking about an obvious drop in traffic/rankings, but I want to know if there's a guide out there for detecting the subtleties that may be found in a client's website data. One that also helps you take into account all the different variables that may not be related to the engines. Thanks!
Algorithm Updates | EEE30
-
If Google doesn’t know we’re hosted in the UK, does that affect our SERPs?
Hi, In November 2011 our eCommerce website dropped from between 3rd and 4th position in the UK SERPs down to 7th and 8th. A year after this happened, we still haven't moved back up to the original ranking despite all our best efforts, and we're looking for a bit of insight into what could have happened. One of our theories is below; do you think it might be the problem?
In October 2011 we moved from a single-site custom-built CMS hosted in the UK to a multi-site custom-built CMS hosted on a much better server based in the UK. As part of this move we started using CloudFlare to help with security and performance (CloudFlare is a security CDN). Because CloudFlare's servers are in the US, to the outside world it almost looks like we went from a slow hosting company in the UK to a much quicker hosting company in the US. Could this have affected our rankings?
We know that Google takes the server IP address into account as a ranking factor, but as far as we understand it's because they (rightly) believe that a server closer to the user will perform better. So a UK server will serve up pages quicker to a visitor in the UK than a US server, because the data has a shorter distance to travel.
However, we're definitely not experiencing an issue with being recognised as a UK website. We have a .co.uk domain (which is obviously a big indicator), and if you click on "Pages from the UK" in the SERPs we jump up to 3rd place. So Google seems to know we're a UK site. Is the fact that we're using CloudFlare, and hence hiding our real server IP address, penalising us in the SERPs?
Currently, out of the 6 websites above us, 4 are in the US and 2 are in the UK. All of these are massive sites with lots of links, so smaller ranking factors might be more important for us. Obviously the big downside of not using CloudFlare is that our site becomes much less secure and much slower. Images and some static content are distributed via a local CloudFlare server, which means it should tick Google's box in terms of providing a quick site for users.
CloudFlare say in a blog post that they used to have Google crawl rate and geo-tagging issues in the past when they were just starting out, but in 2010 they started working with "the big search engines" to make sure they treated CloudFlare like a CDN (so special rules that apply to Akamai also apply to CloudFlare). Since they've been working with Google, CloudFlare say that their customers will only see a positive SEO impact.
So at the moment we're at a loss about what happened to our ranking. Google say they take IPs into account for ranking, but by using CloudFlare it looks like we're in the US. We definitely know we're not having geo-tagging issues, and CloudFlare say they're working with Google to ensure its customers aren't seeing a negative impact from using CloudFlare, but a niggling part of us still wonders whether it could impact our SEO. Many thanks, James
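As a quick way to see what the outside world (and a crawler) actually observes, here is a small Python sketch, with a made-up hostname, that resolves a domain to its public IP addresses. For a site proxied through CloudFlare, these are CloudFlare edge addresses rather than the origin server in the UK, which is the mechanism the question describes:

```python
import socket

# Hypothetical domain; substitute your own hostname before running.
hostname = "www.example-shop.co.uk"

# Resolve the public A records, which is all an external crawler sees.
# Behind CloudFlare these are edge/proxy IPs, not the UK origin server's IP.
addresses = sorted({info[4][0] for info in socket.getaddrinfo(hostname, 80, proto=socket.IPPROTO_TCP)})
print(addresses)
```

This only shows what is publicly visible; it says nothing about how much weight Google actually gives the resolved IP versus stronger signals like the .co.uk ccTLD.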
Algorithm Updates | OptiBacUK0
-
Too Many Non-Niche-Specific Links?
Something just occurred to me today. I work in-house for an embroidered patch company, but I respond to a lot of HARO queries about Marketing, SEO, SEM, Web Design, etc. So, we have a lot of links from these types of sites. Additionally, I have done guest blogs on these topics because those are what I'm knowledgeable about. We also have links from customers' personal blogs or websites stating they got their patches from us and are happy, blah, blah, blah. On top of that, we hired someone who ended up getting tons of .edu links by spamming blogs. Oy. I'd estimate only about 10% of our links come from embroidery, sewing, screen printing, promotional products, etc. types of sites. I guess it's not really known or documented how much weight Google places on niche-specific links; we just assume that it matters, and I'm sure it does. Our rankings are fine now, but I'm looking for some opinions from other SEOs about how much they think this will matter in the future or how much it matters now. Could this hurt us in the future?
Algorithm Updates | UnderRugSwept0
-
Regarding Google Panda: would it be wise to use automatically generated content when there is no content?
Hi guys, I am currently creating a local business directory, and when we first start there will be a lot of business listings without a business description until the owner of that business comes to submit one. So if a business listing has no description, would it be better to have an automatically generated business description like this one:
www.startlocal.com.au/retail/books/tas_hobartandsouth/Scene_Magazine_2797040.html
The automatically generated description for the listing on that page is:
"Scene Magazine is a business that is based in Kingston, 7050, TAS: Hobart And South. Scene Magazine is listed in 2 categories including: Magazines and Periodicals Shops and Book Stores and Shops. Within the Magazines and Periodicals Shops category there are 5 businesses within 25 km of Scene Magazine. Some of those businesses included within the radius of 25 km are Island Magazine, Artemis Publishing Consultants and Bride Tasmania Magazine."
Would Google Panda affect this or not, and would it be wise to use this auto-generated content when there is no description for a business?
Algorithm Updates | usaccess608
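For illustration, here is a minimal Python sketch (the field names and data are made up) of the kind of template that produces a fallback description like the one quoted in that question from the structured data a directory already holds for each listing; whether Panda treats such boilerplate kindly is exactly the open question:

```python
# Hypothetical listing record, built from fields the directory already stores.
listing = {
    "name": "Scene Magazine",
    "suburb": "Kingston",
    "postcode": "7050",
    "region": "TAS: Hobart And South",
    "categories": ["Magazines and Periodicals Shops", "Book Stores and Shops"],
    "nearby": ["Island Magazine", "Artemis Publishing Consultants", "Bride Tasmania Magazine"],
}

def fallback_description(biz: dict) -> str:
    """Template used only until the owner submits a real description."""
    return (
        f"{biz['name']} is a business based in {biz['suburb']}, {biz['postcode']}, {biz['region']}. "
        f"It is listed in {len(biz['categories'])} categories, including {' and '.join(biz['categories'])}. "
        f"Nearby businesses include {', '.join(biz['nearby'])}."
    )

print(fallback_description(listing))
```

Because every listing generated this way shares the same sentence skeleton, the risk is thin, near-duplicate pages at scale; the safer pattern is usually to keep such pages noindexed (or very few in number) until real owner-submitted descriptions replace the template.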