Find all external 404 errors/links?
-
Hi All,
We recently discovered that another site was linking to ours, but to an incorrect URL, resulting in a 404 error. We only found this by pure chance, and wondered whether there is a tool out there that will tell us when a site is linking to an incorrect URL on our site?
Thanks
-
If you don't have access to the logs, that could be an issue - there aren't really any automated tools out there, as a tool would need to crawl every website on the web to find the 404 errors.
I haven't tried this, so it's just an idea: go into GSC and download all the links pointing to your site (and do the same from places like Moz, Ahrefs, and Majestic), then put that list of URLs into Screaming Frog or URL Profiler, look at the linked URLs, and see if any of them are returning a 404. Not sure if this would work, but it may be worth a try.
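If you'd rather script that idea than feed the export into a crawler, a minimal sketch in stdlib Python (assuming you've already exported the backlink target URLs; the `get_status` injection point is just there so the 404 filter can be tested without hitting the network):

```python
from urllib.request import Request, urlopen
from urllib.error import HTTPError, URLError


def http_status(url, timeout=10):
    """Return the HTTP status code for a URL (HEAD request), or None on a network error."""
    try:
        req = Request(url, method="HEAD", headers={"User-Agent": "broken-link-check"})
        with urlopen(req, timeout=timeout) as resp:
            return resp.status
    except HTTPError as e:
        return e.code   # 4xx/5xx responses raise, but still carry the status code
    except URLError:
        return None     # DNS failure, timeout, refused connection, etc.


def find_broken(urls, get_status=http_status):
    """Return the subset of URLs that respond with a 404."""
    return [u for u in urls if get_status(u) == 404]
```

Run `find_broken()` over the exported target URLs and you get back exactly the ones that external sites are pointing at but that no longer resolve.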
Thanks
Andy
-
Great, I'll take a look. Maybe I'll run a trial to see if it does exactly what I need.
Thanks for the info!
-
Good idea!
Although, some of the clients we do SEO for don't host their websites on our server, and we don't have access to their server logs etc.
I was hoping for an automated dashboard like Moz, Screaming Frog, or Ahrefs, as mentioned above. Given the number of clients we have, opening up and running through all their log files could be time-consuming.
Cheers for the info though - it may come in useful in the future, or to someone else reading this.
-
Hi
The best way I have found is to look in your server logs; it's the only true place to find out what Google is doing on your site.
Download the logs and look at all the 404 errors - it's quite simple and, depending on the size of your logs, can take you around five minutes' worth of work. The longer the time period you can analyse in your logs, the better.
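To show what "look at all the 404 errors" amounts to in practice, here's a sketch that parses Combined Log Format lines (the default for Apache and Nginx access logs) and pulls out each 404'd path together with its referrer - the referrer is what tells you which external site is holding the bad link:

```python
import re
from collections import Counter

# Combined Log Format:
# host ident user [time] "request" status size "referer" "user-agent"
LOG_LINE = re.compile(
    r'^(?P<host>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+)[^"]*" (?P<status>\d{3}) \S+ '
    r'"(?P<referer>[^"]*)"'
)


def broken_links(log_lines):
    """Yield (requested_path, referer) pairs for every 404 in the log."""
    for line in log_lines:
        m = LOG_LINE.match(line)
        if m and m.group("status") == "404":
            yield m.group("path"), m.group("referer")


def top_404s(log_lines, n=10):
    """The n most frequently 404'd paths, with hit counts."""
    return Counter(path for path, _ in broken_links(log_lines)).most_common(n)
```

Feed it the downloaded log file line by line and sort by count; any 404 with an external referrer is a mislinked inbound URL worth redirecting or chasing up.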
Thanks
Andy
-
Hi David.
Ahrefs.com offers that as a service: a broken-links report.
Another way to do that search: download the historic backlinks list and, with a mass checker, check where the links point nowadays. I've used GScraper and its option to crawl outbound links.
Best of luck.
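The outbound-link crawl mentioned above can be approximated in stdlib Python (a sketch of the general technique, not GScraper's actual implementation): fetch each referring page from your backlink export, extract its anchors, and keep the ones pointing at your domain - those are then the URLs to status-check.

```python
from html.parser import HTMLParser
from urllib.parse import urlparse


class LinkExtractor(HTMLParser):
    """Collect every href found in <a> tags on a page."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def links_to_domain(html, domain):
    """Return the outbound links in a page that point at the given domain."""
    parser = LinkExtractor()
    parser.feed(html)
    return [u for u in parser.links if urlparse(u).netloc == domain]
```

Pair `links_to_domain()` with a status check on each returned URL and you have a small homemade broken-backlink finder.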
GR.