Nah, ditch 'em. Article directory links are pretty much garbage these days. Any minuscule benefit they may still provide isn't worth the headache of wondering whether those are the links keeping you down. Also, if you ever have to file a reconsideration request, it'll be one less thing to clean up, because under a manual review there isn't a Google engineer out there who isn't going to frown on those links.
KrisRoadruck
@KrisRoadruck
Job Title: Digital Marketing Consultant
Favorite Thing about SEO: The high-score board (SERPs)
Latest posts made by KrisRoadruck
-
RE: Will cleaning up old PR articles help SERPs?
-
RE: Number of occurrences of a keyword
Keyword density is irrelevant unless you are just spamming the crap out of the page with the term. Assuming you are writing naturally, I'd have to guess it's a really long page if your partial keyword appears 60 times. If you aren't writing naturally, fix it, of course. If, however, it's not a big block of text we are talking about but rather an ecommerce page with a ton of product listings that just happen to contain the word (for example, a page listing different types of boots, so the word "boot" naturally appears a bunch of times), I think Google is smart enough to figure out what's going on there and not ding you for it.
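If you want a quick sanity check on what the density actually looks like relative to page length, something like this rough sketch works. It's plain JavaScript and the inputs are just placeholders, so treat it as an illustration rather than a tool:
// Rough sketch: count how many times a phrase appears in a page's text
// and what fraction of the total words it accounts for.
function keywordStats(text, phrase) {
  var words = text.toLowerCase().split(/\s+/).filter(Boolean);
  var hits = text.toLowerCase().split(phrase.toLowerCase()).length - 1;
  var phraseWords = phrase.trim().split(/\s+/).length;
  return {
    occurrences: hits,
    totalWords: words.length,
    density: (hits * phraseWords) / words.length
  };
}
For example, 60 hits of a single-word phrase on a 3,000-word page already works out to 2% density, which is why I'm guessing the page is long.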
Hope that helps.
-
RE: Will cleaning up old PR articles help SERPs?
Article directories have been looked down on by Google for quite some time now (basically since Panda, long before Penguin). Couple that with Google coming down on overly aggressive money-anchor linking, plus the semi-duplicated content issue you have there (I'm guessing you spun the articles so you could shop the same piece to all 10 directories each time), and that's a bad recipe. I'd definitely delete the articles if you still have access to all the accounts. 50 unique articles times 10 directories, so roughly 500 junk links in total? Ditch them and then work on getting some really awesome replacements (you won't need nearly as many). High-quality guest blogging is the new article submission. See if you can't get guest author accounts at a few tightly related sites in your industry, or aim high and go for guest author accounts on general-purpose but super-authoritative sites (a few examples that allow this: National Geographic, cracked.com, Washington Times, guardian.co.uk, Investopedia; you get the idea). Dumping those old links and adding even 20-30 great ones is going to be a huge lift for you.
-
RE: Advisable to pay for a link from a highly reputed site in same domain?
One thing to keep in mind when doling out this kind of advice is that time isn't free. If the cost of a one-off payment for a link is far less than the hourly cost times the number of hours it takes to develop great content and make a new friend, plus the failure rate on that sort of outreach (which is extremely high if you've spent any time doing it), then saying it will save money isn't really true at all. Not advocating buying links per se, but yeah, time isn't free.
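Just to put rough numbers on that (all made up for illustration): if content plus outreach runs you, say, $75 an hour and 8 hours per attempt, and only 1 in 5 attempts actually lands a link, each earned link effectively costs 75 * 8 * 5 = $3,000. Any fair comparison with a paid placement has to start from something like that figure, not from the idea that outreach is free.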
-
RE: Advice on buying a domain name for a valuable link
I do this pretty regularly, so hopefully this will be of help. If you decide to go this route, there is a ton of due diligence that needs to be done. First, keep in mind that post-Penguin there are a ton of people dumping essentially burned domains. When you are digging through your drop lists, start by making sure the domain is still indexed by Google at all. Next, take your list, dump it into a Google Doc, and use the Moz API (tutorial here: http://www.seomoz.org/ugc/updated-tool-seomoz-api-data-for-google-docs) to pull the following stats for your list:
Page Authority
Domain Authority
Domain Trust
Root Linking Domains to Root Domain
Page Authority is really not overly important at this stage; you are just using it to figure out whether the site was www or non-www for when you do the site rebuild (more on that later).
For the other metrics, don't touch anything with a DA below 40, a DT below 4, or root linking domains below, say, 150.
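If your drop list is long, a quick filter pass along these lines saves a lot of eyeballing. Just a sketch; "domains", "da", "dt", and "rootLinkingDomains" are placeholder names for whatever your export actually produces:
// Rough sketch: keep only drop-list domains that clear the minimum bars above.
// "domains" is assumed to be an array of objects pulled from the sheet.
var candidates = domains.filter(function (d) {
  return d.da >= 40 && d.dt >= 4 && d.rootLinkingDomains >= 150;
});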
Once you have your remaining list (it'll be a lot smaller than the one you started with), you now have to basically check SEOmoz's work. A lot of the time a domain that has been spammed all to hell will still have semi-decent Moz numbers; they just aren't that good at detecting spammy links yet. Look for really high money-term anchor counts (bad), a really high link-to-root-linking-domain ratio (means lots of sitewides, also bad), and things of that nature.
Once you've chucked out all the domains that are really ugly, take your now very, very small list and go to archive.org. You are looking to see how long the domain existed in its most recent state. Does it look like it changed hands a bunch, or has it been pretty much the same for years? Does archive.org have most of the site archived? You'll need that for reconstruction. As others have mentioned, you basically want to restore it to its previous state as closely as you can, at least to start out with.
If all that checks out, look at the whois info. You are going to want to register it with the same registrar it had last time, and match the previous whois data as best as you can, so it looks like the owner just forgot to renew for a while and then fixed it. Once the whole site (or as much of it as you can manage) is restored, let that sucker camp for a few weeks. Let Google get used to it being back, and make sure they don't pull a PageRank reset on it for the drop (if it had some PageRank and it suddenly drops to zero, you might as well toss the thing out; it means Google knows it changed hands).
If all looks good after a few weeks, slip your link in wherever it makes sense. A new page linked to from the home page would be ideal; adding a new link to an old content page is, in most cases, a pretty glaring sign of link manipulation.
Hope that helps
P.S. The version of the Google Doc included in that tutorial doesn't have Domain Trust or root linking domains built in. You'll need to mod the code slightly if you want to pull those stats. Here is the bit you need to change.
Add these two rows:
"ptrp" : "domain trust" , // 524288
"pid" : "root domain links" // 8192
directly under this existing row:
"upa" : "page authority" , // 34359738368
Then change this row from this:
var SEOMOZ_ALL_METRICS = 103616137253; // All the free metrics
to this:
var SEOMOZ_ALL_METRICS = 103616669733; // All the free metrics
Now, in the main sheet, just add two columns, one with the heading "domain trust" and the other with "root domain links", and be sure to change the yellow-box formula to include the extra two columns during its fetch. Cheers.
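(If you'd rather double-check that new constant than trust my copy-paste, it's just the old value plus the two bit flags noted in the comments: 103616137253 + 524288 + 8192 = 103616669733.)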
-
RE: How important are internal pages to overall site rank?
It's important to have a hero page (or two); otherwise you risk keyword cannibalization. The short and dirty way to handle this, of course, is to make sure that any page that is basically on-topic with your hero page but isn't the hero page links back to the hero page with the appropriate anchor somewhere in the editorial body. There are more elegant ways to handle it, but this way will get you into less trouble than, for example, rel=canonical-ing all those topically relevant non-hero pages to the hero page (bad idea, don't do it).
-
RE: How do you track new inbound links?
Not sure about OSE, but in both Ahrefs and Majestic there is a "first discovered" date on links. Perhaps this should go in the feature-request bucket for the OSE team?
-
RE: What Are the Best Practices for Ranking for Synonyms?
Your best bet would be to use the words semi-interchangeably on that page (don't overdo it) and then just grab some links with anchor texts of the various synonyms. A single page is going to be the stronger bet.
-
RE: Why has SEOmoz added G+ code to multiple pages?
I doubt a single string is going to add much overhead. I'm not sure how they have designed their CMS, but it may just be a case of having it everywhere being easier to implement than making it page-specific. The other reason may be to make sure Google has an easy time associating sub-page content with the author/publisher without having to look up the main page. I'm simply guessing here, though; I'm sure a Mozzer will come along and give you a more definitive answer once they see your question.
-
RE: Why has SEOmoz added G+ code to multiple pages?
I can't speak for the Moz staff, but meta tags aren't treated as links, and neither are rel attributes within an <a> tag. So there's no risk of "leaking link juice from every page." :-)
Best posts made by KrisRoadruck
-
RE: What's your best hidden SEO secret?
Oh man, this is me to a T. It's hard to explain to others that the rest of the time spent farting around online is really a primer for this. If I just tried to come in for an hour and leave, it would never be an in-the-zone hour.
-
RE: Follow or Nofollow outbound links to resourceful, related site?
Most of the time, if it's worth linking out to, it's worth leaving followed. Nofollow really doesn't help you conserve any juice these days; it just tells Google you don't trust the site you are linking out to. I would say if you don't trust a site you are linking to, maybe just don't link to it at all (unless it's for a blog post where you are poking fun at the site in question and the link is for reference purposes).
-
RE: Link Building: Feel like I've hit a wall
The very first thing that strikes me here is that you say you have about 12,000 links from only 48 domains (quick math on that ratio at the end of this post).
Site age may be helping them, but I'd be willing to bet they also have a more IP-diverse link profile than you do. It sounds like you have the niche-relevant links covered, so now it's time to get outside that box and start going after other metrics. Here are the big six I look at:
Trust/Authority
Niche Relevance (you have this covered)
Segmentation (where the links appear on the page)
Anchor
IP diversity (links from many domains > many links from few domains) <- start here
Link weight (link juice, PageRank, whatever you want to call it)
I'd be willing to bet your 80 bucks would be better spent on something like a Paul's or Angela's backlink packet than on renting one or two links a month. And it's lower risk, too.
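On the IP-diversity point, the back-of-the-napkin math from your own numbers: 12,000 links across 48 referring domains is 12000 / 48 = 250 links per domain, which is pretty much the signature of sitewide links rather than one-off editorial ones.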
-
RE: Why does google not show my ecommerce category page when I have the same keywords for many products in the product title?
A great way to get around this would be to apply a different title within the category page linking to the product page than the one displayed on the product page itself. People at the category level likely already know they are in the men's shirt section, so at that level the title could be brand + color + style (long, short, sweater, et cetera) + sizing options, and then when the user gets to the actual product page your H1 and title tag reflect the full string including "men's shirt". Obviously this may take some adjustments in your CMS, but if what you are seeing seems definitive enough, this is a great middle ground and likely worth the coding effort to add an extra entry field for "category page product title". You may even be able to automate it by simply having your system drop the category text from product titles within a category page (rough sketch below).
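A minimal sketch of that last automation idea, assuming your templates can run a bit of JavaScript and that the category name appears verbatim in the product title (both assumptions, so adapt to your CMS):
// Rough sketch: derive the shorter category-page title by stripping the
// category phrase from the full product title. Purely illustrative.
function categoryPageTitle(fullTitle, categoryName) {
  var pattern = new RegExp(categoryName, 'i');
  return fullTitle.replace(pattern, '').replace(/\s{2,}/g, ' ').trim();
}
// categoryPageTitle("Acme Long Sleeve Blue Mens Shirt - L/XL", "Mens Shirt")
// returns "Acme Long Sleeve Blue - L/XL"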
-
RE: Outsourcing development to external agencies
Hmm. Well, there is a LOT of ground to cover here, but the short version is that it sounds like it's time to rebuild your website. Depending on the size and scope of the project, my firm may be willing to take it on pro bono. We actively give to charity every year, and though that usually means monetary donations, I would absolutely be interested in donating something we do best!
Now to the core of the question: yes, having everything on a single domain is usually better. There are situations where subdomains are appropriate, but without more information I'd say this is likely not the case for you. I also think that having your subdomains hosted in different parts of the world is likely sending some really odd signals to Google about the connection between your subdomains and the main site. I would definitely try to get the whole site hosted in one place.
I can't fathom a good reason to use iframes in the scenario you are describing instead of just self-hosting the content in the correct location.
-
RE: Did Google's Farmer Update Positively/Negatively Affect Your Search Traffic?
Oddly, the handful of sites I have that most probably should have been affected negatively actually saw boosts in traffic, CTR on ads, and eCPM on ads. Not huge jumps, but yeah, I benefited, which was odd. These domains are testbeds I set up a long time ago to find the upper limit of what you can "get away with" in Google, so I know where to draw the line.
Other interesting notes: I ran some tests over the weekend (the sample may not be large enough to be statistically significant yet), but it seems the Farmer update has almost no impact on indexation of poor or duplicate content given enough raw link juice (no anchor, IP diversity, or any other cool factors, just a flat link from a big ol' bucket of link juice), which I find disappointing. =/ I expected a bit of a challenge after all this hoopla. Even though I rock the grey hat, I'm still pretty anti duplicate/crap content.
-
RE: Will a MediaWiki guides section affect my site's SEO?
You may see some initial drop as Google figures out where your content went and how it's being repurposed (even with smart 301ing you are still chunking up the content).
HOWEVER, wiki-style internal linking has proven to be very effective! I mean, all those rich anchors and links in the content instead of the nav... these are all nice signals.
I'm fairly certain MediaWiki comes out of the box with followed internal links and nofollow on external links (if memory serves, the $wgNoFollowLinks setting in LocalSettings.php controls the external-link behavior), but you can change all of this with settings and plugins.
All in all, I'd say that if you are willing to commit the time, it's going to be a net gain both for your rankings (over time) and, more importantly, for your users.
-
RE: The perfect work environment?
Why stop at 3 monitors when you can have 6! The pics attached are of my actual setup at the office.
6x 27in LED monitors on articulating arms.
-
RE: SEO is dead?
I think the "SEO is dead" statement comes from a fundamental lack of understanding of what SEO actually is. While we search marketers tend to lump a lot of extra marketing tasks into the SEO bucket, SEO in its simplest form means exactly what it says: search engine optimization. Search engines will always exist in one form or another; even using the "Find" function in Twitter is tapping into a search engine. As long as there are people searching for things, and thus information retrieval, optimizing for it will always be a necessity. People tend to confuse short-term tactics with long-term definitions. Just because article marketing, link building, or Digg may no longer be the way to optimize something for search, or even if Google and Bing cease to be the preferred search portals, it in no way means SEO will go away; only the strategies, tactics, and applications will change.
Kris Roadruck stumbled into the search marketing industry in early 2009, leaving a career in network engineering behind for something new. He quickly realized he had both a passion and a natural talent for promoting sites and building efficient processes to scale link building and content writing.
Kris is respected as a subject-matter expert on content strategy, outsourcing, and link building by many industry thought leaders. When he is not off managing his ever-expanding link-building army, you can find him drinking vodka Red Bulls and shooting the shit at SEO/SEM conferences or working on his programming skills.