How do I know if my SEO person is creating solid links vs spammy links?
-
Some good suggestions above: try some backlink checking tools, check the linking sites' Domain Authority, etc. However, in my opinion, the best way to ensure your SEO person is building good links is to learn the basic difference between a good and a bad link and actually check the links yourself. The bigger your site and the more links you build, the less feasible this is, but the principle still applies: you should be able to look at the links being built and understand which are good and which are bad. Obviously, if you are building massive numbers of links this is difficult (although there are tools that can help), but if your SEO employee (I assume it is singular) is building good links, they shouldn't be building massive numbers of them anyway, unless those links are coming organically, through content or a product so popular that high-quality links appear without traditional link building.
Also, how are you measuring success? Ranking growth? Number of links? Quality of links? Ask your SEO person to report on the links being built and to include measures like Domain Authority and Page Authority, then audit those links periodically yourself. You'll start to learn enough about SEO to measure their performance on your own (seriously, try Googling "audit my backlinks"; there are some great tools out there, as well as reasonably simple explanations of the major things to look out for).
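If your SEO person (or a backlink tool) can export the link list to a CSV, even a rough script can shortlist the links worth a manual look. The sketch below is only an illustration: the column names (source_url, domain_authority), the keyword list, and the DA threshold are assumptions you would adapt to whatever export format your tool actually produces.

```python
# Rough backlink triage sketch: flag links that deserve a manual review.
# Assumes a CSV export with columns: source_url, domain_authority.
# Column names, keywords, and thresholds are illustrative, not tied to any specific tool.
import csv
from urllib.parse import urlparse

SPAMMY_HINTS = ("casino", "viagra", "loan", "free-seo", "linkfarm")  # illustrative keywords only

def flag_links(csv_path, min_da=20):
    flagged = []
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            url = row["source_url"]
            domain = urlparse(url).netloc.lower()
            reasons = []
            # Very low Domain Authority is a prompt to look closer, not a verdict.
            if float(row.get("domain_authority", 0) or 0) < min_da:
                reasons.append(f"low DA (< {min_da})")
            if any(hint in domain for hint in SPAMMY_HINTS):
                reasons.append("suspicious linking domain")
            if reasons:
                flagged.append((url, ", ".join(reasons)))
    return flagged

if __name__ == "__main__":
    for url, why in flag_links("backlinks.csv"):
        print(f"{url}  ->  {why}")
```

A low DA score alone doesn't make a link bad; the goal is just to shrink the list down to the links worth checking by hand.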
I also agree with those mentioning that outsourcing SEO is a dangerous (if somewhat necessary) strategy. In my opinion, learning the basics of SEO is one of the most valuable things a small business owner can do, since it will both improve your ability to market online and protect you against hiring a bad employee.
-
SEO is too important for a small business owner to outsource to anyone. Learn to do SEO yourself and you won't have to worry about all these shady practitioners.
-
I've never used LinkDetox, which trung.ngo mentions below, but if they have a free version where you can just see whether your backlink profile looks spammy to them, at least you'd have one opinion on the matter. How many links are you looking to have reviewed?
-
You can hire someone, but you need to trust that they'll do a good job reviewing.
Have you asked your current SEO for a list of links that have been built?
-
You can check out http://www.linkdetox.com/. It's a link auditing tool that will, at least at a high level, give you some information about whether there are spammy links pointing to your site. I'd still recommend manually reviewing the "toxic" links it reports to determine whether they're actually spammy or not.
-
Is there a third party that can review the links for me?
-
This all depends on your purpose for SEO. Are you trying to rank well, or are you trying to draw referral traffic through these links? Personally, I would shoot for the latter. Once you have your purpose down, you should be able to work with your SEO and have them be totally transparent with you about the links they are building. If they aren't transparent, or they give you excuses as to why they can't show you the links they have built, that should be a red flag.
As for determining whether a link is quality or not, that really depends on whose eye is on it. I like to look at the websites my links are on and determine, first, whether the site is real, and then ask myself whether it's the type of site the people I care about actually visit. That's not to say I don't have a few links on random sites that aren't necessarily spammy but aren't really that high quality either. What really matters is that you have a variety of links to your site.
It's OK to have a bunch of semi-quality links to your site; just make sure you have more quality links that actually generate traffic and eyeballs. Those are the links that will get you visitors and bump you up in the rankings. Just keep a healthy diet of various links across the web. I hope this helps.
-
The first question I'd ask is: where are you getting links from? If the sites are not relevant to your business, or the article/page in which the link exists is not relevant to your business, I would say it's time to reevaluate your relationship with said consultant. I would also ask the SEO whether they're requesting specific anchor text. I'd opt for no specific anchor-text requests, to keep the links more editorial in nature; having too much specific anchor text can get you in trouble with algorithm filters like Penguin.
Hope that helps you get started in evaluating your links!
-Trung
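Following up on the anchor-text point in the answer above: one simple self-serve check is to see how concentrated your anchor text is across your backlinks. The sketch below assumes the same kind of hypothetical CSV export as earlier, this time with an anchor_text column (the column name is an assumption); a very high share for one exact-match phrase is the sort of pattern worth raising with your SEO.

```python
# Sketch: measure how concentrated anchor text is across a backlink export.
# Heavy repetition of one exact-match phrase is the kind of pattern
# algorithm filters like Penguin are believed to target.
import csv
from collections import Counter

def anchor_distribution(csv_path, top_n=10):
    counts = Counter()
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            anchor = (row.get("anchor_text") or "").strip().lower()
            counts[anchor or "(empty/image link)"] += 1
    if not counts:
        return
    total = sum(counts.values())
    for anchor, n in counts.most_common(top_n):
        print(f"{anchor!r}: {n} links ({n / total:.1%})")

if __name__ == "__main__":
    anchor_distribution("backlinks.csv")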
Related Questions
-
Top hierarchy pages vs footer links vs header links
Hi all, we want to change some of the linking structure on our website. I think we are repeating some non-important pages in the footer menu, so I want to move them down to second-level pages and bring some important pages into the footer menu. But I'm unsure which placement gives a page more influence: the top menu, the bottom menu, or normal in-content links? What is the best place to link non-important pages, so that link juice doesn't get diluted by passing through them? And what is the right place for "keyword pages" that need to influence our rankings for those keywords? One thing to note: we cannot highlight the pages created from a keyword perspective in the top menu. Thanks
Intermediate & Advanced SEO | vtmoz
-
We possibly have internal links that link to 404 pages. What is the most efficient way to check our site's internal links?
We possibly have internal links on our site that point to 404 pages as well as links that point to old pages. I need to tidy this up as efficiently as possible and would like some advice on the best way to go about this.
Intermediate & Advanced SEO | andyheath
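For the question above, a small crawl of your own site is often the quickest first pass before reaching for a dedicated tool. The sketch below is only an illustration, assuming a site small enough to crawl from a single script; the start URL is a placeholder, and requests/beautifulsoup4 are third-party packages (pip install requests beautifulsoup4).

```python
# Sketch: crawl a site's internal links and report any that return 404,
# along with the page that linked to them.
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin, urlparse

def find_broken_internal_links(start_url, max_pages=200):
    domain = urlparse(start_url).netloc
    to_visit, seen, broken = [(start_url, None)], set(), []
    while to_visit and len(seen) < max_pages:
        url, source = to_visit.pop()
        if url in seen:
            continue
        seen.add(url)
        try:
            resp = requests.get(url, timeout=10)
        except requests.RequestException:
            broken.append((url, source, "request failed"))
            continue
        if resp.status_code == 404:
            broken.append((url, source, "404"))
            continue
        if "text/html" not in resp.headers.get("Content-Type", ""):
            continue
        soup = BeautifulSoup(resp.text, "html.parser")
        for a in soup.find_all("a", href=True):
            link = urljoin(url, a["href"]).split("#")[0]
            # Only follow links that stay on the same domain.
            if urlparse(link).netloc == domain and link not in seen:
                to_visit.append((link, url))
    return broken

if __name__ == "__main__":
    for url, source, status in find_broken_internal_links("https://www.example.com/"):
        print(f"{status}: {url} (linked from {source})")
```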
-
SEO Monthly Strategy
Out of curiosity, do any Mozzers use a monthly, spreadsheet-style SEO strategy that is set out on a daily basis, like this:
Day 1 - purchase/write 3 articles
Day 2 - comment on 5 blogs
Day 3 - upload article 1
Day 4 - directory submissions
Day 5 - blog promotion
Day 6 - etc.
If so, do you find this to be the most effective way of working, with this rigid structure?
Intermediate & Advanced SEO | fertilefrog
-
Avoiding Duplicate Content with Used Car Listings Database: Robots.txt vs Noindex vs Hash URLs (Help!)
Hi guys, we have developed a plugin that allows us to display used vehicle listings from a centralized, third-party database. The functionality works similar to autotrader.com or cargurus.com, and there are two primary components:
1. Vehicle Listings pages: the page where the user can use various filters to narrow the vehicle listings and find the vehicle they want.
2. Vehicle Details pages: the page where the user actually views the details about said vehicle. It is served up via Ajax, in a dialog box on the Vehicle Listings pages. Example functionality: http://screencast.com/t/kArKm4tBo
The Vehicle Listings pages (#1) we do want indexed and to rank. These pages have additional content besides the vehicle listings themselves, and those results are randomized or sliced/diced in different and unique ways. They're also updated twice per day. We do not want to index #2, the Vehicle Details pages, as these pages appear and disappear all the time based on dealer inventory, and don't have much value in the SERPs. Additionally, other sites such as autotrader.com, Yahoo Autos, and others draw from this same database, so we're worried about duplicate content. For instance, entering a snippet of dealer-provided content for one specific listing that Google indexed yielded 8,200+ results (example Google query). We did not originally think that Google would even be able to index these pages, as they are served up via Ajax. However, it seems we were wrong, as Google has already begun indexing them. Not only is duplicate content an issue, but these pages are not meant for visitors to navigate to directly! If a user were to navigate to the URL directly from the SERPs, they would see a page that isn't styled right. Now we have to determine the right solution to keep these pages out of the index: robots.txt, noindex meta tags, or hash (#) internal links.
Robots.txt advantages: super easy to implement; conserves crawl budget for large sites; ensures the crawler doesn't get stuck. After all, if our website only has 500 pages that we really want indexed and ranked, and vehicle details pages constitute another 1,000,000,000 pages, it doesn't seem to make sense to make Googlebot crawl all of those pages.
Robots.txt disadvantages: doesn't prevent pages from being indexed, as we've seen, probably because there are internal links to these pages. We could nofollow these internal links, thereby minimizing indexation, but this would put 10-25 nofollowed internal links on each Vehicle Listings page (will Google think we're PageRank sculpting?).
Noindex advantages: does prevent Vehicle Details pages from being indexed; allows ALL pages to be crawled (advantage?).
Noindex disadvantages: difficult to implement (the Vehicle Details pages are served via Ajax, so there is no page <head> for a meta robots tag; the solution would have to involve the X-Robots-Tag HTTP header and Apache, sending a noindex header based on querystring variables, similar to this Stack Overflow solution). This means the plugin functionality is no longer self-contained, and some hosts may not allow these types of Apache rewrites (as I understand it). It also forces (or rather allows) Googlebot to crawl hundreds of thousands of noindexed pages. I say "force" because of the crawl budget required; the crawler could get stuck/lost in so many pages, and may not like crawling a site with 1,000,000,000 pages, 99.9% of which are noindexed. Finally, it cannot be used in conjunction with robots.txt; the crawler never reads the noindex tag if it's blocked by robots.txt.
Hash (#) URL advantages: by using hash URLs for the links from Vehicle Listings pages to Vehicle Details pages (such as "Contact Seller" buttons), coupled with JavaScript, the crawler won't be able to follow/crawl these links. Best of both worlds: crawl budget isn't overtaxed by thousands of noindexed pages, and the internal links used to index robots.txt-disallowed pages are gone. It accomplishes the same thing as nofollowing these links, but without looking like PageRank sculpting (?), and it does not require complex Apache stuff.
Hash (#) URL disadvantages: is Google suspicious of sites with (some) internal links structured like this, since it can't crawl/follow them?
Initially, we implemented robots.txt, the "sledgehammer solution." We figured that we'd have a happier crawler this way, as it wouldn't have to crawl zillions of partially duplicate Vehicle Details pages, and we wanted it to be like these pages didn't even exist. However, Google seems to be indexing many of these pages anyway, probably based on internal links pointing to them. We could nofollow the links pointing to these pages, but we don't want it to look like we're PageRank sculpting or something like that. If we implement noindex on these pages (and doing so is a difficult task itself), then we will be certain these pages aren't indexed. However, to do so we will have to remove the robots.txt disallow, in order to let the crawler read the noindex on these pages. Intuitively, it doesn't make sense to me to make Googlebot crawl zillions of Vehicle Details pages, all of which are noindexed; it could easily get stuck or lost, it seems like a waste of resources, and in some shadowy way it feels bad for SEO. My developers are pushing for the third solution: the hash URLs. This works on all hosts and keeps all functionality in the plugin self-contained (unlike noindex), and conserves crawl budget while keeping Vehicle Details pages out of the index (unlike robots.txt). But I don't want Google to slap us 6-12 months from now because it doesn't like links structured like this. Any thoughts or advice you guys have would be hugely appreciated, as I've been going in circles, circles, circles on this for a couple of days now. Also, I can provide a test site URL if you'd like to see the functionality in action.
Intermediate & Advanced SEO | browndoginteractive
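On the X-Robots-Tag option discussed in the question above: the header doesn't have to come from Apache rewrites; whatever server-side code renders the Ajax response can attach it. The plugin's actual stack isn't described here, so the Flask route and URL pattern below are purely a hypothetical illustration of the header itself, not the plugin's real implementation.

```python
# Minimal sketch: sending "X-Robots-Tag: noindex" on Ajax-served detail pages.
# Flask and the /vehicle-details route are illustrative assumptions only.
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/vehicle-details/<listing_id>")
def vehicle_details(listing_id):
    # Placeholder payload; a real plugin would pull this from the third-party database.
    payload = {"id": listing_id, "description": "dealer-provided listing content"}
    response = jsonify(payload)
    # Ask crawlers not to index (or follow links in) this Ajax-served response.
    response.headers["X-Robots-Tag"] = "noindex, nofollow"
    return response

if __name__ == "__main__":
    app.run()
```

Note (as the question itself points out) that robots.txt must not block these URLs, or Googlebot will never see the header.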
-
Do I have any harmful links? If so, what should I do?
URL in question: www.nasserilegal.com/criminal.html
I'm using OSE and see some questionable backlinks. At first glance, the page authority and domain authority look great, but once you go to the actual pages, they look spammy. If the links are hurting the site's rankings, should I try to remove them manually, just ignore them and continue to build good-quality links, or even build a new site? I've noticed that over the last couple of weeks the rankings have started to slip. Thanks in advance, Lucas
Intermediate & Advanced SEO | micasalucasa
-
Does Google WMT download links button give me all the links they count
Hi, different people are telling me different things. If I download "all links" to Excel using the button in WMT, am I seeing all the links Google is 'counting' when evaluating my site? Is that right?
Intermediate & Advanced SEO | usedcarexpert
-
Link Juice Vs. Page Rank
What is better from an SEO point of view: a page with a PageRank of 5 and 0 clicks linking to your site, or a page with a PageRank of 3 and 1,000 clicks linking back to your site? Is link juice important? Do search engines count link juice?
Intermediate & Advanced SEO | SEODinosaur
-
Effects on SEO with CDN
Should we be concerned about any adverse consequences to our site's SEO value when moving the site's assets (JavaScript and CSS files) to a CDN (Akamai)?
Intermediate & Advanced SEO | Volusion.com