Moz Q&A is closed.
After more than 13 years, and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we're not completely removing the content - many posts will still be viewable - we have locked both new posts and new replies. More details here.
How do I find and fix 404 errors and broken links?
-
Hi, My campaign is showing me many 404 problems, and other tools are also showing me broken links, but the links they show me do work, and I can't seem to find the broken links or the cause of the 404s.
Can you help?
-
Hi Yoseph,
If you liked the broken link checker Chrome plugin, then you could check out another Chrome plugin that the company I work for created. It's called Redirect Path, and I use it all the time. It's a handy header and redirect checker that flags up any 301, 302, 404 & 500 errors on any page you visit.
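If you ever need the same kind of check outside the browser, here's a minimal sketch using Python's requests library (assuming Python is an option for you; the URL below is just a placeholder). It follows a URL's redirect chain and prints the status code of every hop:

```python
import requests

def check_redirect_path(url):
    """Follow a URL's redirect chain and print each hop's status code."""
    response = requests.get(url, allow_redirects=True, timeout=10)
    for hop in response.history:
        print(hop.status_code, hop.url)  # intermediate 301/302 hops
    print(response.status_code, response.url)  # final destination

check_redirect_path("http://www.example.com/old-page")  # placeholder URL
```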
Hope that helps!
-
Just one addition to Adam's response: remember to design a custom 404 page. It's a good idea to include links to pages you want to rank, a search box, or a list of featured pages (posts, categories, etc.). At the very least, it may help keep users on your website.
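If your site happens to run on a Python framework such as Flask, wiring up a custom 404 page takes only a few lines. This is just an illustrative sketch; "404.html" is a hypothetical template:

```python
from flask import Flask, render_template

app = Flask(__name__)

@app.errorhandler(404)
def page_not_found(error):
    # "404.html" is a hypothetical template containing a search box,
    # featured posts/categories, and links to pages you want to rank.
    return render_template("404.html"), 404
```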
Hope it helps.
Sergio.
-
Adam, any more good links and tools that you can share with me? I see you know "something" about SEO and web-building...
-
No problem. Happy to help.
-
Thanks Adam, Great help!!!!
You made my life much easier!
-
Hi Yoseph,
To find broken links I like to use the Check My Links plugin for Chrome.
As for the 404 errors: I believe there is a way to view which pages are linking to them using the SEOMoz tools, but I prefer to just use the Screaming Frog spider. Once you have crawled the site with this tool, all you need to do is select the Response Codes tab at the top, filter by Client Error (4xx), then click on one of the URLs and select the In Links tab at the bottom to see the linking pages.
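If you'd rather script that same "In Links" report yourself, here's a rough sketch in Python using the requests and BeautifulSoup libraries (both assumptions about your setup; the starting page list is a placeholder):

```python
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

def report_broken_links(page_urls):
    """For each page, check every outgoing link and print the 4xx
    ones together with the page that links to them."""
    for page in page_urls:
        html = requests.get(page, timeout=10).text
        soup = BeautifulSoup(html, "html.parser")
        for a in soup.find_all("a", href=True):
            target = urljoin(page, a["href"])  # resolve relative links
            try:
                status = requests.head(target, allow_redirects=True,
                                       timeout=10).status_code
            except requests.RequestException:
                status = "unreachable"
            if status == "unreachable" or 400 <= status < 500:
                print(status, target, "(linked from " + page + ")")

report_broken_links(["https://www.example.com/"])  # placeholder page list
```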
Hope that helps,
Adam.
-
Thanks for your time.
I just can't find any broken links... I don't understand your answer.
-
I think you need to post the URL! Have you tried the links from every page? Personally, I would use an include file to reduce errors and maintenance time, but I'm not sure if you are using one...
Related Questions
-
Broken canonical link errors
Hello, several tools I'm using are returning errors due to "broken canonical links", but I'm not too sure why that is. E.g.:
Page URL: domain.com/page.html?xxxx
Canonical link URL: domain.com/page.html
Returns an error. Any idea why? Am I doing it wrong? Thanks,
G
Technical SEO | GhillC
-
How to find orphan pages
Hi all, I've been checking these forums for an answer on how to find orphaned pages on my site, and I can see a lot of people saying that I should cross-check my XML sitemap against a Screaming Frog crawl of my site. However, the sitemap is created using Screaming Frog in the first place... (I'm sure this is the case for a lot of people too.) Are there any other ways to get a full list of orphaned pages? I assume it would be a developer request, but where can I ask them to look / extract? Thanks!
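One possible approach, sketched in Python under the assumption that a developer can export a crawler-independent URL list (e.g. from the CMS database or server logs) and that you have a Screaming Frog internal CSV export, whose URL column is usually named "Address":

```python
import csv

def find_orphan_candidates(all_urls_path, crawl_export_path):
    # all_urls_path: a plain text file, one URL per line, exported by a
    # developer from the CMS database or server logs, so the list does
    # not depend on the crawler at all.
    with open(all_urls_path) as f:
        known = {line.strip() for line in f if line.strip()}
    # crawl_export_path: a Screaming Frog internal CSV export;
    # "Address" is assumed to be the URL column in that export.
    with open(crawl_export_path, newline="") as f:
        crawled = {row["Address"] for row in csv.DictReader(f)}
    # URLs that exist but were never reached by the crawl are
    # candidate orphan pages.
    return sorted(known - crawled)

for url in find_orphan_candidates("all_urls.txt", "internal_all.csv"):
    print(url)
```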
Technical SEO | KJH-HAC
-
Can you use Screaming Frog to find all instances of relative or absolute linking?
My client wants to pull every instance of an absolute URL on their site so that they can update them for an upcoming migration to HTTPS (the majority of the site uses relative linking). Is there a way to use the extraction tool in Screaming Frog to crawl one page at a time and extract every occurrence of href="http://"? I have gone back and forth between using an XPath extractor as well as a regex and have had no luck with either. Ex. XPath: //*[starts-with(@href, "http://")][1] Ex. Regex: href=\"//
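A possible fallback, sketched in Python with the requests and BeautifulSoup libraries (both assumptions; the page URL is a placeholder), that pulls every absolute http:// href from a single page:

```python
import requests
from bs4 import BeautifulSoup

def absolute_hrefs(page_url):
    """Return every href on the page that is an absolute http:// URL."""
    html = requests.get(page_url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    return [a["href"] for a in soup.find_all("a", href=True)
            if a["href"].startswith("http://")]

for href in absolute_hrefs("https://www.example.com/"):  # placeholder page
    print(href)
```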
Technical SEO | Merkle-Impaqt
-
Should we nofollow footer social links?
Like most sites today, we have a whole raft of social links in our footer. These are on every page of the site and link out to Facebook, Twitter, YouTube, etc. Should these links be nofollowed to avoid juice leaving our site, or would you recommend allowing them to be followed to increase the power of these social sites? Is there a definitive yay or nay on these social links?
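For what it's worth, if you do decide to nofollow them, here's a hedged Python/BeautifulSoup sketch of marking footer social links (the host list and sample HTML are purely illustrative):

```python
from bs4 import BeautifulSoup

SOCIAL_HOSTS = ("facebook.com", "twitter.com", "youtube.com")  # illustrative

def nofollow_footer_social_links(html):
    """Add rel="nofollow" to any footer link pointing at a social host."""
    soup = BeautifulSoup(html, "html.parser")
    footer = soup.find("footer")
    if footer is not None:
        for a in footer.find_all("a", href=True):
            if any(host in a["href"] for host in SOCIAL_HOSTS):
                a["rel"] = "nofollow"
    return str(soup)

print(nofollow_footer_social_links(
    '<footer><a href="https://www.facebook.com/acme">Facebook</a></footer>'))
```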
Technical SEO | Twist360
-
What are link schemes?
Hello friends, today I was reading about link schemes at http://support.google.com/webmasters/bin/answer.py?hl=en&answer=66356. The article covers several ways to avoid Google penalties and also discusses low-quality links, but I can't understand the part about "Low-quality directory or bookmark site links". Is that referring to low PageRank, Alexa rank, or something else?
Technical SEO | KLLC
-
Mass 404 Checker?
Hi all, I'm currently looking after a collection of old newspaper sites that have had various developments during their time. The problem is there are so many 404 pages all over the place, and the sites are bleeding link juice everywhere, so I'm looking for a tool where I can check a lot of URLs at once. For example, from an OSE report I have done a random sampling of the target URLs and some of them 404 (eek!), but there are too many to check manually to know which ones are still live and which ones have 404'd or are redirecting. Is there a tool anyone uses for this, or a way one of the SEOMoz tools can do this? Also, I've asked a few people personally how to check this and they've suggested Xenu, but Xenu won't work as it only checks current site navigation. Thanks in advance!
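One way to script this, assuming Python is an option: a minimal sketch that checks a pasted list of URLs in parallel with the requests library (the example URLs are placeholders):

```python
import concurrent.futures
import requests

def status_of(url):
    try:
        return url, requests.head(url, allow_redirects=True,
                                  timeout=10).status_code
    except requests.RequestException as exc:
        return url, "error: " + exc.__class__.__name__

def mass_check(urls, workers=20):
    """Check a large list of URLs in parallel and print the non-200 ones."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
        for url, status in pool.map(status_of, urls):
            if status != 200:
                print(status, url)

# Paste the target URLs from your OSE export here:
mass_check(["https://www.example.com/a", "https://www.example.com/b"])
```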
Technical SEO | thisisOllie
-
Find broken links in Excel?
Hello, I have a large list of URLs in an Excel sheet and I am looking for a way to check them for 404 errors. Please help! Adam
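A rough starting point, assuming Python with the requests and openpyxl libraries (the filename and column letters are placeholders): read the URLs from one column and write each status code into the column next to it.

```python
import requests
from openpyxl import load_workbook

def check_excel_urls(path, url_col="A", status_col="B"):
    """Read URLs from one worksheet column and write each HTTP
    status code into the column next to it."""
    wb = load_workbook(path)
    ws = wb.active
    for row in range(1, ws.max_row + 1):
        url = ws[url_col + str(row)].value
        if not url:
            continue  # skip empty cells
        try:
            status = requests.head(url, allow_redirects=True,
                                   timeout=10).status_code
        except requests.RequestException:
            status = "unreachable"
        ws[status_col + str(row)] = status
    wb.save(path)

check_excel_urls("urls.xlsx")  # placeholder filename
```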
Technical SEO | digitalops
-
Is link cloaking bad?
I have a couple of affiliate gaming sites and have been cloaking the links; the reason I do this is to avoid having so many external links on my sites. In the robots.txt I tell the bots not to index my cloaked links. Is this bad, or doesn't it really matter? Thanks for your help.
Technical SEO | jwdesign