How to Link a Network of Sites w/o Penguin Penalties (header links)
-
I work for a network of sites that offer country-exclusive content. The content for the US is different from that for Canada, Australia, the UK, etc., but covers the same subjects.
To make navigation easy, we have included in the header of every page a drop-down with links to the other countries, much like the Facebook/Twitter buttons most of you use. As a result, every page on every site carries the same links, with the same anchor text.
Example:
Because every page of every site has the same links (they're in the header), the "links containing this anchor text" ratio is through the roof in Open Site Explorer. Do you think this could be a reason for a Penguin penalty?
If you think this would hurt us, what would you suggest? Nofollow the links? Remove the links entirely and create a single page of links? Other suggestions?
-
If 50% of your links use [Exact] keyword anchor text, there is a good chance of a Penguin penalty.
I would recommend going back and changing the links from the [Exact] terms to the brand name.
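If it helps to put a number on it first, one rough way is to export the inbound links report (for example from Open Site Explorer) and count how many links use the exact-match phrase versus the brand name. Below is a minimal Python sketch assuming a CSV export with an "anchor_text" column; the column name, file name, and anchor phrases are placeholders, not the actual export format.

import csv
from collections import Counter

EXACT_MATCH_ANCHOR = "us country content"   # hypothetical exact-match anchor text
BRAND_ANCHOR = "examplebrand"               # hypothetical branded anchor text

def anchor_distribution(csv_path):
    # Count inbound links by anchor text from a backlink export.
    counts = Counter()
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            counts[row["anchor_text"].strip().lower()] += 1
    return counts

counts = anchor_distribution("inbound_links.csv")
total = sum(counts.values())
if total:
    print("exact-match anchors: {:.0%}".format(counts[EXACT_MATCH_ANCHOR] / total))
    print("branded anchors:     {:.0%}".format(counts[BRAND_ANCHOR] / total))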
-
Related Questions
-
Do header tags need to be placed in top-to-bottom order?
Generally, heading tags are placed starting with h1, then h2, h3, and h4. Some of our pages start with h3, and the h1 appears only after a couple of h2 and h3 tags. Is this a bad placement that hurts SEO?
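If you want to audit this across pages, here is a minimal Python sketch (assuming the requests and BeautifulSoup libraries are available; the URL is a placeholder) that prints the heading tags in document order, so pages where an h3 appears before the h1 are easy to spot.

import requests
from bs4 import BeautifulSoup

def heading_sequence(url):
    # Return (tag name, text) for every h1-h6, in the order they appear in the HTML.
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    return [(tag.name, tag.get_text(strip=True))
            for tag in soup.find_all(["h1", "h2", "h3", "h4", "h5", "h6"])]

for name, text in heading_sequence("https://www.example.com/some-page"):
    print(name, "-", text)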
Algorithm Updates | | vtmoz1 -
How to determine the best keyword strategy/purpose for a blog in 2014?
Currently our blog has been used to add content to the site targeting desired keywords (fairly top-level). For example, if we wanted organic traffic for "Some City Contractors" (by no means a long tail), we would write a blog post using this key term in the title, URL, perhaps a subheading, and a couple of variations of the term throughout the subheadings and body copy. The idea was that since there was so much work to be done to optimize the static site pages (rewriting that copy), we decided to crank out fresh content targeting these high-level KWs, assuming a search result is a result: as long as we got real estate there, a click, and a link from that article to the relevant site page, we were golden (well, maybe not golden, but good).

We are now building a new, responsive site and taking care to make sure the site's relevant pages are nicely optimized. Higher-level pages are optimized for high-level KWs, and subpages target longer-tail KWs identified in keyword research. Along the way, an SEO said it was bad that so many of our blog posts were better optimized for key terms than the actual site pages (i.e., service pages, the things you would find in the main nav). This does make some sense to me.

So what is the new purpose for our blog in this new age of Google and ever-increasing social influence? Should we forget about focusing on KWs already addressed within the site's core? Focus more on interesting, super-long-tail terms that may not have a ton of traffic but are relevant (and, by the way, something like 3 million terms are searched for the first time each day, right)? Or forget keywords altogether, on the theory that as long as the topic is relevant and interesting, the real pay-off is in social interactions? I'm really interested to see whether this gets a clear-cut answer or turns into a lengthy discussion...
Algorithm Updates | | vernonmack1 -
After Penguin 2.0, 20-25% drop sitewide, no Google unnatural links message. What could be causing it?
Hi, since Penguin 2.0 we've taken a 20-25% knock but have not received an unnatural links message from Google. After sending a bunch of removal requests, I decided to submit a disavow file anyway two weeks ago, and tried to make sure I rooted out some links that were built way back when our site started and link-building best practice was a bit shadier. Analysis of our backlink profile shows about 40-50% of links coming from general directories; I'm wondering if perhaps their weight has been adjusted and this is why the drop occurred. Having said that, we have some high-quality links from government sources and highly trusted sites, so the profile is not too spammy. Can anyone shed some light or offer suggestions? Thanks
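For what it's worth, when you decide which of those general directories to add to the disavow file, the file Google's disavow tool accepts is plain text with one domain: line per host and optional # comment lines. Here is a minimal sketch that writes one; the domain names are purely illustrative.

# Domains chosen for the disavow file; these names are purely illustrative.
suspect_domains = [
    "general-directory-one.example",
    "general-directory-two.example",
]

with open("disavow.txt", "w", encoding="utf-8") as f:
    f.write("# Directory links - removal requested, no response received\n")
    for domain in sorted(set(suspect_domains)):
        f.write("domain:" + domain + "\n")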
Algorithm Updates | | Mulith0 -
Content Cache Memory & Removal of a 301 Redirect to Relieve a Link Penalty
Hi, a client site has a very poor link legacy stretching back over 5 years. I started the campaign a year ago, building valuable, good-quality links. Link removals and a disavow file submitted to Google have been done, but after months and months of waiting nothing has happened. If anything, results have been further affected since the recent Penguin update.

A 301 redirect was put in place last year, consequently associating those bad links with the new site structure. I have since removed the 301 redirect in an attempt to detach this legacy, with little success. I have read up on this, and not many people appear to agree on whether it will work.

My new plan is therefore to start afresh on a new domain, switching from the .com to the .co.uk version, to remove all legacy and all association with the spam-ridden .com. My main concern is whether Google will forever cache content from the spammy .com and remember it, because the content on the new .co.uk site will be exactly the same (content of great quality, receiving hundreds of visitors each month from the blog section alone). The problem is definitely link related and NOT content related, as I imagine people may first suspect. Reusing the content could then cause a duplicate content issue, since it pre-existed on another domain. I will implement a robots.txt file blocking all of the .com site, as well as a noindex, nofollow, and I understand you can submit a site removal request to Google within Webmaster Tools to help fast-track the deindexation of the spammy .com. Once it has been deindexed, the new .co.uk site will go live with the exact same content.

So my question is whether Google will then completely forget that this content ever existed, allowing me to use exactly the same content on the new .co.uk domain without the threat of a duplicate content issue? Also, any insights or experience with removing a 301 redirect, detaching legacy, and how successful it was would be very helpful! Thank you, Denver
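One way to sanity-check the cleanup as you go is to script a quick status check on the old .com URLs: confirm the 301s really are gone, and see whether the pages now return a noindex directive before relying on them dropping out of the index. A minimal sketch using the requests library; the URLs are placeholders and the meta-robots check is deliberately rough.

import requests

OLD_URLS = [
    "https://www.example.com/",          # placeholders for the old .com URLs
    "https://www.example.com/blog/",
]

for url in OLD_URLS:
    # allow_redirects=False so any lingering 301 shows up as the status code itself
    resp = requests.get(url, allow_redirects=False, timeout=10)
    x_robots = resp.headers.get("X-Robots-Tag", "")
    # Very rough check for a meta robots noindex in the returned HTML
    meta_noindex = 'name="robots"' in resp.text.lower() and "noindex" in resp.text.lower()
    print(url, resp.status_code, resp.headers.get("Location", ""), x_robots,
          "meta-noindex" if meta_noindex else "")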
Algorithm Updates | | ProdoDigital0 -
Do scraped or borrowed articles with my links still pass PageRank?
I wrote some articles for Ezine Articles a few years back, and I still see links to my site in OSE coming from copies of these articles that were borrowed from the Ezine Articles bank. Do the links in these articles still count toward my site, including link juice and anchor text, or does Google discount them as duplicate content? I was told that Google counts these links for about 3 weeks and then discounts them as duplicate content, so it's as if they don't exist. Any truth to this, or should I make the articles on my site available for people to copy and paste into their blogs as long as they keep my links intact? Thanks, Ron
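If you do let people copy the articles, one quick way to check whether a republished copy kept your links intact is to fetch it and list the anchors pointing back at your domain. A minimal sketch, with requests and BeautifulSoup assumed; the domain and URL are placeholders.

import requests
from bs4 import BeautifulSoup

MY_DOMAIN = "example.com"   # placeholder for your own domain

def links_back_to_me(republished_url):
    # Return (href, anchor text) for every link on the page that points at MY_DOMAIN.
    html = requests.get(republished_url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    return [(a["href"], a.get_text(strip=True))
            for a in soup.find_all("a", href=True)
            if MY_DOMAIN in a["href"]]

print(links_back_to_me("https://someblog.example/copied-article"))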
Algorithm Updates | | Ron100 -
One SERP Result, Two Different Link Destinations?
Because my vocabulary isn't up to par, it may be easier for you to skip ahead to the image I've attached. One of my web pages shows up in the Google SERP like this. It has the blue "title" link that goes to one page (URL A), and under that, there is a green "breadcrumb" link that goes to a different page (URL B). Any idea why this is happening and how it can be fixed? Thanks in advance, Benjamin FjhUX.jpg
Algorithm Updates | | 1000Bulbs0 -
Yahoo/Bing cache date went back in time
Within 12 hours of submitting a new site to Yahoo/Bing Webmaster Tools, it was ranking #3 for the primary homepage search term and in the top 5 for about a dozen others. On 7/23 the rankings were steady or climbing, with the most recent cache date of 7/21. Now the site only comes up when searching for the domain name, with a cache date of 7/11. I launched the site about 14 days ago, so I am not expecting results yet, but I had never seen this happen, so I am just curious whether anyone else has.
Algorithm Updates | | jafabel0 -
Which is better for SEO: one big site or a number of smaller sites?
Hello, I am about to create a website with product reviews for a certain niche. What I want to know: is it better to have one site with all the reviews, like nicheproductsreviews.com, with pages such as nicheproductsreviews.com/product-one-review.html and nicheproductsreviews.com/product-two-review.html, or to buy multiple domains so the product name is in the domain name, like product-one-review.com and product-two-review.com? As far as I understand, the first approach consolidates all pages on the same site, consolidating all the link juice. However, the second approach lets me have the product name in the main domain URL. Which way is better for SEO, and why?
Algorithm Updates | | voitenkos0