Suggestions on Link Auditing a 70,000 URL list?
-
I have a website with nearly 70,000 incoming links, since it's a somewhat large site that has been online for 19 years.
The rate I was quoted for a link audit from a reputable SEO professional was $2 per link, and clearly I don't have $140,000 to spend on a link audit!
I was thinking of asking you guys for a gold-standard link auditing checklist or tutorial and doing it myself. But then I thought it might be easier to shorten the list by knocking out all the "obviously good" links first. My only concern is that I be 100% certain they are good links.
Is there an "easiest approach" to take for shortening this list, so I can give it to a professional to handle the rest?
-
Hi! I wrote this guide a few years ago on penalty recovery, which may help you, as it covers a lot of methods for auditing links: https://moz.com/blog/ultimate-guide-to-google-penalty-removal
If we were to approach a list of 70k URLs, we'd take the following steps:
- Pull all the URLs into a Spreadsheet
- Split the URLs into domains
- Filter the URLs and search for common spammy words, e.g. 'Link', 'Best', 'Free', 'Cheap', 'Dir', 'SEO', etc. (mark as spam accordingly)
- Run contact finding across all URLs using a tool such as URL Profiler with Whois Lookups
- Filter by contact name and find duplicates (mark as spam accordingly)
- Filter by website type and mark as spam accordingly
- Manually check remaining links
By working through the list domain by domain, you'll rule out thousands of spammy links very quickly, though 70k will ultimately take a few solid days of work. A rough sketch of the keyword-filtering and domain-grouping steps follows below.
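A minimal sketch of those first few steps, assuming the links have been exported to a CSV with a "url" column (the file name, column name, and keyword list are illustrative assumptions you would adapt to your own export):

```python
# Sketch: flag URLs containing common spammy keywords and group the
# results by linking domain so they can be reviewed in batches.
import csv
from collections import defaultdict
from urllib.parse import urlparse

SPAM_KEYWORDS = ["link", "best", "free", "cheap", "dir", "seo"]  # illustrative list

def audit(path):
    by_domain = defaultdict(lambda: {"urls": [], "flagged": 0})
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):              # expects a "url" column
            url = row["url"].strip()
            domain = urlparse(url).netloc.lower().removeprefix("www.")
            hit = any(word in url.lower() for word in SPAM_KEYWORDS)
            by_domain[domain]["urls"].append(url)
            by_domain[domain]["flagged"] += int(hit)
    return by_domain

if __name__ == "__main__":
    results = audit("backlinks.csv")
    # Review the domains with the most keyword hits first.
    for domain, info in sorted(results.items(), key=lambda kv: -kv[1]["flagged"]):
        if info["flagged"]:
            print(f'{domain}: {info["flagged"]}/{len(info["urls"])} URLs flagged')
```

Keyword matching like this only produces candidates for review, not verdicts, which is why the manual check at the end of the list still matters.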
Hope this helps,
Lewis
-
Have you looked at www.monitorbacklinks.com? It's a good tool.
-
Hello,
Although it's important to do a link audit if you feel you have been penalized, for some sites a link audit isn't necessary. That said, if you feel you do need one, there are a few options. Ideally, you would go through each link and review it to see how it may be impacting your site, but often site owners don't have the time to do this.
- Review obvious links - Grab 50-100 links at a time and give each one a quick glance to decide whether it belongs on a list of potentially bad links. This way you can quickly pass over links you know are not hurting your rankings. Over time you can slowly work through your list and pin down which links are bad.
- Focus on spam analysis - Run your site through Moz's Open Site Explorer and review the spam analysis. You're not going to get every single link here, but you can get an idea of which links are lower quality.
- Look into other companies - $2 per link is quite high, and there are other companies out there that will do a link audit, removal, and disavow for much less. If you would like a quote, please contact us. Look into multiple options; don't get sold on what just one place tells you.
Hope this is helpful, if you have any additional questions please feel free to ask.
Chris
-
$2 per link is very expensive when you are looking at so many, especially as a big part of this can be automated (hint: this should cost you no more than about $5-$10k if outsourced).
Linda has given you some good tips there, but I do agree that you need to tread carefully because you can often go too far and end up jumping out of the frying pan and into the fire.
It really does help to first gather all of the links from as many sources as you can and, as already mentioned, create your de-duped list. Depending on who you speak to at this point, there are different ways to go through the data and start to segment the links into those you know are dangerous, those that are perhaps a bit of a grey area, and those that are safe.
Cheers,
Andy
-
I concentrate on the "most normal or typical sites will not need to use this tool" part, myself. (Though it sounds like you may not fall into that category.)
So then it's back to downloading as comprehensive a list of links as you can by using various sources and looking them over. (Also, in the past I have used LinkResearchTools to get an overview--it isn't cheap but it is a lot less than $140,000.)
-
Yes. We have confirmed with Sucuri that there was a concerted, intentional spam campaign against our site in 2013 that has since destroyed our rankings. Though Google hasn't given us any warnings, Sucuri had us on a blacklist because of it, and was kind enough to remove us without any cost or obligation on our part to sign up. They also provided us with a list of some of the most offending links so I could disavow them.
With up to 70,000 links in total, I am confident there are more, and to be honest, I see no reason to "leave some", or leave any. I believe the key part of Google's warning is "...if used incorrectly". That means: simply use it correctly, and disavow the bad links, period. That's my take at least.
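Whatever ends up on the "bad" list, the disavow file itself is just plain text: one full URL or one "domain:example.com" line per line, with "#" for comments. A minimal sketch of generating it from a list of flagged domains (the input file name is a placeholder for whatever your audit produces):

```python
# Sketch: write a disavow.txt in the format Google's disavow tool expects,
# one "domain:" line per flagged domain, "#" lines as comments.
with open("flagged_domains.txt", encoding="utf-8") as src, \
        open("disavow.txt", "w", encoding="utf-8") as out:
    out.write("# Domains flagged during the link audit\n")
    for line in src:
        domain = line.strip().lower()
        if domain:
            out.write(f"domain:{domain}\n")
```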
-
First, are you sure you need a link audit? Google is pretty good at ignoring regular spammy links that get picked up over time by large sites, as they say in their "Disavow backlinks" help page.
If you think there is cause for concern, Moz's own Open Site Explorer can give you a list of incoming links that includes a spam score for each one, which can be used as a first pass.
The general drill for a manual link audit is to find all of the links you can (Search Console, Moz, Ahrefs, Majestic, etc.) and create a de-duped list (a quick script can handle the merge and de-dupe; see the sketch below). From there, the "definitely good links" are usually easy to spot--you will recognize them from your industry or from other authoritative sources. And you will probably recognize the spammy "Get Rich/Viagra" backlinks as well. (If you sort your list by domain, it is easier to pick them out as a group.)
The rest are the ones to look at more closely.
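A minimal sketch of that merge-and-de-dupe step, assuming each tool's export has been normalized to a CSV with a "url" column (real exports use different column names, so treat the file names and the column name as placeholders):

```python
# Sketch: merge backlink exports from several tools, de-dupe by exact URL,
# then reduce to the unique set of linking domains for domain-by-domain review.
import csv
from urllib.parse import urlparse

EXPORTS = ["search_console.csv", "moz.csv", "ahrefs.csv", "majestic.csv"]

def load_unique_urls(paths):
    seen = set()
    for path in paths:
        with open(path, newline="", encoding="utf-8") as f:
            for row in csv.DictReader(f):
                url = row.get("url", "").strip()
                if url and url not in seen:
                    seen.add(url)
                    yield url

unique_urls = list(load_unique_urls(EXPORTS))
domains = sorted({urlparse(u).netloc.lower().removeprefix("www.") for u in unique_urls})
print(f"{len(unique_urls)} unique URLs across {len(domains)} linking domains")
```

Sorting and reviewing by domain rather than by URL is what turns 70,000 rows into a much smaller number of actual decisions.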
But as I said to start, unless you think you are being penalized, tread lightly when it comes to disavowals.
To quote from Google [about disavowal]:
"This is an advanced feature and should only be used with caution. If used incorrectly, this feature can potentially harm your site’s performance in Google’s search results. We recommend that you disavow backlinks only if you believe you have a considerable number of spammy, artificial, or low-quality links pointing to your site, and if you are confident that the links are causing issues for you. In most cases, Google can assess which links to trust without additional guidance, so most normal or typical sites will not need to use this tool."