What's the best free tool for checking for broken links?
-
I'm trying to find the best tool to check for broken links on our site. We have over 11k pages and I'm looking for something fast and thorough! I've tried Xenu and LinkChecker. Any other ideas?
-
Hey Kristi,
Xenu is great and free, free I tells ya! But if you can shell out a bit of dosh, Screaming Frog is a better tool.
SEOmoz's own Dr. Pete did a bit of a write-up here:
http://www.seomoz.org/blog/crawler-faceoff-xenu-vs-screaming-frog
-
I agree. Here's a link: http://home.snafu.de/tilman/xenulink.html. The site looks a little shady, but the tool actually does a decent job.
-
If you're looking for something with a lot more SEO-specific functionality, it's definitely worth trying the IIS Search Engine Optimization Toolkit 1.0.
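If you'd rather script a quick pass yourself instead of (or alongside) a desktop tool, a minimal crawler is only a few dozen lines. Here's a sketch in Python; the start URL, the page cap, and the requests/BeautifulSoup dependencies are my own assumptions, not anything recommended in this thread:

```python
# Minimal broken-link checker sketch. Assumes `requests` and
# `beautifulsoup4` are installed; START_URL is a placeholder.
from collections import deque
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup

START_URL = "https://www.example.com/"  # placeholder, not from this thread
MAX_PAGES = 500  # keep the crawl bounded while testing

def crawl(start_url, max_pages):
    host = urlparse(start_url).netloc
    seen, queue, broken = set(), deque([start_url]), []
    while queue and len(seen) < max_pages:
        url = queue.popleft()
        if url in seen:
            continue
        seen.add(url)
        try:
            resp = requests.get(url, timeout=10)
        except requests.RequestException as exc:
            broken.append((url, str(exc)))
            continue
        if resp.status_code >= 400:
            broken.append((url, resp.status_code))
            continue
        # Only parse and enqueue links found on pages of our own host.
        if urlparse(url).netloc != host:
            continue
        soup = BeautifulSoup(resp.text, "html.parser")
        for a in soup.find_all("a", href=True):
            link = urljoin(url, a["href"])
            if link.startswith(("http://", "https://")):
                queue.append(link)
    return broken

if __name__ == "__main__":
    for url, problem in crawl(START_URL, MAX_PAGES):
        print(problem, url)
```

It status-checks external links too, but only parses and follows pages on your own host, which keeps an 11k-page crawl from wandering off-site.
-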
Related Questions
-
Worried About Broken Links
In WordPress, I'm using a plugin called Broken Link Checker to check for broken links. Should I be worried about / spend time fixing outbound links that result in 403 Forbidden, Server Not Found, Timeout, 500 Internal Server Error, etc.? Thanks for your help! Mike
Technical SEO | naturalsociety3
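If you're weighing the same question, it can help to re-test the flagged outbound URLs and bucket them by failure type before deciding what to fix, since hard 404s differ from transient timeouts. A minimal sketch in Python; the URL list and timeout are placeholders, not anything from the question:

```python
# Sketch: re-test outbound links and bucket them by failure type,
# mirroring the statuses listed in the question above.
import requests

outbound = ["http://example.com/gone", "http://example.org/ok"]  # placeholders

buckets: dict[str, list[str]] = {}
for url in outbound:
    try:
        status = str(requests.get(url, timeout=10).status_code)
    except requests.Timeout:
        status = "Timeout"
    except requests.ConnectionError:
        status = "Server Not Found"
    buckets.setdefault(status, []).append(url)

for kind, urls in sorted(buckets.items()):
    print(f"{kind}: {len(urls)} link(s)")
    for u in urls:
        print("  ", u)
```
-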
How does Googlebot see two of the same rel canonicals?
Hi, I have a website where all the original URLs have a rel canonical back to themselves. This is kind of a fail-safe: if a parameter occurs, the URL with the parameter gets a canonical back to the original URL. For example, this URL: https://www.example.com/something/page/1/ has this canonical: https://www.example.com/something/page/1/ which is the same, since it's an original URL. This URL https://www.example.com/something/page/1/?parameter has this canonical https://www.example.com/something/page/1/ because, like I said, parameterized URLs have a rel canonical back to their original URLs. So: https://www.example.com/something/page/1/?parameter and https://www.example.com/something/page/1/ both have the same canonical, which is https://www.example.com/something/page/1/. I'm telling you all this because when Rogerbot crawled my website, it reported duplicates. This happened because it read the canonical (https://www.example.com/something/page/1/) of the original URL (https://www.example.com/something/page/1/) and the canonical (https://www.example.com/something/page/1/) of the URL with the parameter (https://www.example.com/something/page/1/?parameter) and saw that both were pointing to the same canonical (https://www.example.com/something/page/1/). So I would like to know if Googlebot treats canonicals the same way, because if it does then I'm full of duplicates 😄 Thanks.
Technical SEO | dos06590
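The pattern described above is usually generated dynamically by stripping the query string. Here's a minimal sketch in Python of that idea; the function name and framework-free approach are illustrative, not from the question:

```python
# Sketch: derive a self-referencing canonical by stripping query
# parameters, as the question above describes. Illustrative only.
from urllib.parse import urlsplit, urlunsplit

def canonical_url(requested_url: str) -> str:
    """Return the URL with query string and fragment removed."""
    scheme, netloc, path, _query, _fragment = urlsplit(requested_url)
    return urlunsplit((scheme, netloc, path, "", ""))

# Both forms resolve to the same canonical, which is why a crawler
# that groups pages by canonical target reports them together:
assert canonical_url("https://www.example.com/something/page/1/") == \
       canonical_url("https://www.example.com/something/page/1/?parameter")
```

A self-referencing canonical is generally fine: two URLs declaring the same canonical is the consolidation working as intended, not duplication.
-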
Should I disavow links from pages that don't exist anymore?
Hi. I'm doing a backlink audit of two sites, one with 48k and the other with 2M backlinks. Both are very old sites, and both have tons of backlinks from old pages and websites that don't exist anymore, but these backlinks still exist in the Majestic Historic index. I cleaned up the obvious useless links and passed the rest through Screaming Frog to check whether those old pages/sites even exist. There are tons of link-sending pages that return a 0, 301, 302, 307, 404, etc. status. Should I consider all of these pages to be bad backlinks and add them to the disavow file? Just a clarification: I'm not talking about 301-ing a backlink to a new target page. I'm talking about the origin page generating an error at ping, e.g.: originpage.com/page-gone sends me a link to mysite.com/product1. Screaming Frog pings originpage.com/page-gone and returns a status error. Do I add originpage.com/page-gone to the disavow file or not? Hope I'm making sense 🙂
Technical SEO | IgorMateski0
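If you do decide the dead origin pages belong in the disavow file, the mechanical part scripts easily. A minimal sketch in Python; the input file name and the choice of which statuses count as "gone" are my assumptions, not advice from the thread:

```python
# Sketch: re-check backlink origin pages and collect the dead ones
# into disavow-file lines. The input file name is a placeholder.
import requests

DEAD = {404, 410}  # statuses this sketch treats as "page is gone"

def build_disavow(origin_urls):
    lines = []
    for url in origin_urls:
        try:
            # Follows redirects, so a 301 to a live page counts as alive.
            status = requests.head(url, timeout=10, allow_redirects=True).status_code
        except requests.RequestException:
            status = 0  # connection failure, like the 0 in the question
        if status in DEAD or status == 0:
            lines.append(url)  # disavow files accept full URLs or domain: entries
    return lines

if __name__ == "__main__":
    with open("backlink_origins.txt") as f:  # placeholder input file
        urls = [line.strip() for line in f if line.strip()]
    print("\n".join(build_disavow(urls)))
```
-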
Manual Action - When requesting links be removed, how important to Google is the address you're sending the requests from?
We're starting a campaign to get rid of a bunch of links and then submitting a disavow report to Google, to get rid of a manual action. My SEO vendor said he needs an email address on the domain in question, @travelexinsurance.com, to send and receive the removal-request emails. He said Google won't consider correspondence to and from webmasters if it's sent from a domain other than the one with the manual action penalty. Due to company/compliance rules, I can't allow a vendor outside our building to have an email address like that. I've seen other people mention they just used a gmail.com account. Or we could use a similar domain, such as @travelexinsurancefyi.com. My question: how critical is it that the correspondence with the webmasters come from the exact website domain?
Technical SEO | Patrick_G0
-
Robots.txt: crawlers are hitting URLs we don't want them to
Hello. We run a number of websites, and underneath them we have testing websites (subdomains); on those sites we have robots.txt files disallowing everything. When I logged into Moz this morning, I could see the Moz spider had crawled our test sites even though we've told it not to. Does anyone have any ideas how we can stop this happening?
Technical SEO | ShearingsGroup0
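One quick sanity check is to confirm the robots.txt each subdomain actually serves says what you think it does, both for the wildcard agent and for Moz's crawler (which identifies itself as rogerbot). A minimal sketch in Python with a placeholder subdomain:

```python
# Sketch: verify what a robots.txt actually allows for given user
# agents. The subdomain URL here is a placeholder.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser("https://test.example.com/robots.txt")  # placeholder
rp.read()  # fetches and parses the file

for agent in ("*", "rogerbot"):
    print(agent, "allowed:", rp.can_fetch(agent, "https://test.example.com/"))
```

Note that urllib's parser treats an unfetchable (e.g., 404) robots.txt as "allow everything", which is roughly how crawlers behave too, so make sure the file is really being served on each test subdomain.
-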
Just read Travis Loncar's YouMoz post and I have a question about Pagination
This was a brilliant post. I have a question about pagination on sites that are opting to use Google Custom Search. Here is an example of a search results page from one of the sites I work on: http://www.ccisolutions.com/StoreFront/category/search-return?q=countryman I notice in the source code of sequential pages that the rel="next" and rel="prev" tags are not used. I also noticed that the URL does not change when clicking on the numbers for the subsequent pages of the search results. Also, the canonical tag of every subsequent page looks like this: Are you thinking what I'm thinking? All of our Google Custom Search pages have the same canonical tag... Something's telling me this just can't be good. Questions: 1. Is this creating a duplicate content issue? 2. If we need to include rel="prev" and rel="next" on Google Custom Search pages, as well as make the canonical tag accurate, what is the best way to implement this? Given that searchers type in such a huge range of search terms, it seems the canonical tags would have to be dynamically generated. Or (best-case scenario!) am I completely overthinking this and it just doesn't matter on dynamically driven search results pages? Thanks in advance for any comments, help, etc.
Technical SEO | danatanseo1
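If the pages do end up needing accurate, dynamically generated tags, the generation itself is straightforward. Here's a sketch in Python; the base URL and the q/page parameter names are assumptions for illustration, not how Google Custom Search actually structures its URLs:

```python
# Sketch: build per-page canonical / prev / next tags for a paginated
# search results page. Parameter names (q, page) are assumptions.
from urllib.parse import urlencode

BASE = "http://www.example.com/search"  # placeholder

def pagination_tags(query: str, page: int, last_page: int) -> str:
    def page_url(n: int) -> str:
        params = {"q": query}
        if n > 1:
            params["page"] = n
        return f"{BASE}?{urlencode(params)}"

    tags = [f'<link rel="canonical" href="{page_url(page)}">']
    if page > 1:
        tags.append(f'<link rel="prev" href="{page_url(page - 1)}">')
    if page < last_page:
        tags.append(f'<link rel="next" href="{page_url(page + 1)}">')
    return "\n".join(tags)

print(pagination_tags("countryman", 2, 5))
```

The key point is that each results page canonicals to itself, with its own page number in the URL, instead of every page sharing one canonical.
-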
I'm redesigning a website which will have a new URL format. What's the best way to redirect all the old URLs to the new ones? Is there an automated, fast way to do this?
For example, the new URLs will look like this: https://oregonoptimalhealth.com/about_us.html while the old ones were like this: http://www.oregonoptimalhealth.com/home/ooh/smartlist_1/services.html I have to redirect almost 100 old pages to the correct new pages. What's the best and easiest way to do this?
Technical SEO | PolarisMarketing0
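For roughly 100 pages, a hand-checked one-to-one map plus server-side 301s is the usual approach; "automated" here mostly means generating the rules from a spreadsheet. A sketch in Python that emits Apache Redirect 301 lines from a CSV (the file name and two-column format are my assumptions):

```python
# Sketch: turn a CSV of old-path,new-url pairs into Apache
# "Redirect 301" lines for .htaccess. File name/format are assumptions.
import csv

with open("redirect_map.csv", newline="") as f:  # placeholder mapping file
    for old_path, new_url in csv.reader(f):
        # Apache's Redirect directive takes a path and an absolute target, e.g.
        # Redirect 301 /home/ooh/smartlist_1/services.html https://oregonoptimalhealth.com/about_us.html
        print(f"Redirect 301 {old_path} {new_url}")
```
-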
If multiple links on a page point to the same URL, and one of them is no-followed, does that impact the one that isn't?
Page A has two links on it that both point to Page B. Link 1 isn't no-follow, but Link 2 is. Will Page A pass any juice to Page B?
Technical SEO | Jay.Neely0