Can I reduce the number of on-page links by just adding nofollow tags to duplicate links?
-
Our site works on templates, and we essentially have a link pointing to the same place three times on most pages. The links are images, not text.
We're over 100 links according to our on-page report, yet ranking fairly well for the key SERPs our core pages are optimised for.
I'm thinking I should engage in some on-page link juice sculpting and add nofollow tags to two of the three repeated links.
That said, Moz's on-page optimiser isn't flagging link cannibalization.
Any thoughts, guys? I hope this scenario makes sense.
-
Hey Alex,
You hit the "oak furniture" analogy on the head there. I could go through our core pages disabling the cannibalizing link, so the "oak furniture" link wouldn't appear on the oak page, because the user is already there.
The CMS is reliable but a little FUGLY: I can't get a nice font to render and align how I want, so I resorted to images.
How much of an edge do you think we're missing because of this issue? Can you quantify it? Essentially this isn't going to boost usability massively, just sculpt link juice, hopefully.
-
Do you mean keyword cannibalisation - where different pages are targeting the same keyword? You have numerous pages where the main key phrase in the title tag and elsewhere on-page includes "oak furniture", for example.
I don't have much experience with CMSs, so apologies if my question sounds a bit dumb: is it not as simple as selecting a default size and font like Arial or Helvetica?
-
I agree, Alex - the problem is link cannibalization. As the site works from templates, there's a lot of variation in how the text renders across browsers, so from a user point of view, and with all the issues around W3C validation, images were the best way to go.
It's a toss-up between the links looking ugly and link juice.
-
Again, if you think it will benefit the human user. I'm hesitant to give more specific advice when I haven't seen the site in question. You can always take advantage of image alt attributes if you're not already.
EDIT - if you're talking about The Furniture Market, the navigation links on the left and top would definitely be better as text.
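If you want to check your templates in bulk, here's a minimal sketch using Python's stdlib `html.parser` that flags image links with missing or empty alt attributes. The class name, URLs, and markup are all hypothetical, standing in for your real pages:

```python
from html.parser import HTMLParser

class ImageLinkAuditor(HTMLParser):
    """Flags <img> tags inside <a> links that lack descriptive alt text."""
    def __init__(self):
        super().__init__()
        self.in_link = False
        self.current_href = None
        self.missing_alt = []  # hrefs whose image has no usable alt text

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a":
            self.in_link = True
            self.current_href = attrs.get("href")
        elif tag == "img" and self.in_link:
            alt = (attrs.get("alt") or "").strip()
            if not alt:
                self.missing_alt.append(self.current_href)

    def handle_endtag(self, tag):
        if tag == "a":
            self.in_link = False
            self.current_href = None

# Hypothetical markup standing in for a template's image navigation
html = (
    '<a href="/oak-furniture"><img src="/img/oak.png" alt="Oak furniture"></a>'
    '<a href="/pine-furniture"><img src="/img/pine.png" alt=""></a>'
    '<a href="/walnut-furniture"><img src="/img/walnut.png"></a>'
)
auditor = ImageLinkAuditor()
auditor.feed(html)
print(auditor.missing_alt)  # → ['/pine-furniture', '/walnut-furniture']
```

Running something like this over each template would at least tell you where the image links are passing no anchor signal at all.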
-
From our home page we have all-image links to our core pages - do you mean I should add some exact-match anchor text links to our core pages from there?
It's an old CMS we're on, but I still think I can squeeze some extra performance out of it.
-
Not necessarily - the 100 links is just a warning, not an error. I can't speak for SEOmoz, but I think that number dates from the days when search engine crawlers weren't so sophisticated; the likes of Google have no problem crawling a large number of on-page links now. If removing some links would make for a better user experience, then go ahead.
This is an old post, but be aware that in most situations it's probably only the anchor text of the first repeated link that counts - http://www.seomoz.org/blog/results-of-google-experimentation-only-the-first-anchor-text-counts - so that's one possibility for improving your link optimisation. Another way to optimise your internal linking is to remember that links in body text carry more value than those in sidebars and footers - it's a good way to link to other internal content.
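As a rough way to audit the first-anchor-counts behaviour, here's a minimal stdlib-only sketch (the class name and markup are hypothetical) that records the first anchor text seen for each link target and lists the repeated anchors that probably pass no extra anchor value:

```python
from html.parser import HTMLParser

class FirstAnchorCollector(HTMLParser):
    """Records, per link target, only the first anchor text seen;
    later repeats to the same URL are listed separately, since they
    likely contribute no additional anchor-text signal."""
    def __init__(self):
        super().__init__()
        self._href = None
        self._text = []
        self.first_anchor = {}   # href -> first anchor text on the page
        self.repeats = []        # (href, ignored anchor text) pairs

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            text = "".join(self._text).strip()
            if self._href in self.first_anchor:
                self.repeats.append((self._href, text))
            else:
                self.first_anchor[self._href] = text
            self._href = None

# Hypothetical page linking to the same URL three times
page = (
    '<a href="/oak-furniture">Oak Furniture</a>'
    '<a href="/oak-furniture">Click here</a>'
    '<a href="/oak-furniture">More</a>'
)
c = FirstAnchorCollector()
c.feed(page)
print(c.first_anchor)  # → {'/oak-furniture': 'Oak Furniture'}
print(c.repeats)       # the two later anchors that likely count for nothing
```

The takeaway for a templated site: make sure the first link to each target is the one carrying the anchor text (or alt text) you care about.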
-
Hey Alex,
So you would go for more of a 'link amputation' approach?
-
I'd say it's not worth it these days; if you add nofollow to a link, that link's share of the juice disappears completely. E.g. if you have 10 links on a page, all followed, they'll each get 10% of the link juice. If you make one of those 10 links nofollow, the other nine will still only get 10% each - the 10% that would have gone to the nofollowed link is simply lost. It used to be the case that the link juice was shared between only the followed links, but not any more.
Edit - check out this from Matt Cutts: http://www.youtube.com/watch?v=bVOOB_Q0MZY
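The arithmetic above can be put into a quick model. `juice_per_followed_link` is a hypothetical helper name, and this is a simplification of how PageRank flows, not Google's actual formula:

```python
def juice_per_followed_link(total_links, nofollow_links, sculpting_works):
    """Share of a page's link equity each followed link receives.

    sculpting_works=True models the old behaviour (equity split among
    followed links only, so nofollow concentrated juice elsewhere);
    False models the current behaviour (equity split among ALL links,
    with the nofollowed share simply evaporating).
    """
    followed = total_links - nofollow_links
    divisor = followed if sculpting_works else total_links
    return 1 / divisor

# 10 links on the page, 1 of them nofollowed
old = juice_per_followed_link(10, 1, sculpting_works=True)   # 1/9, about 11.1%
new = juice_per_followed_link(10, 1, sculpting_works=False)  # 1/10, exactly 10%
print(f"old model: {old:.1%} per followed link; current: {new:.1%}, with 10% lost")
```

Under the current model, nofollowing internal links only ever loses equity; it never redirects it, which is why sculpting with nofollow stopped being worthwhile.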