Moz Q&A is closed.
After more than 13 years, and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we're not completely removing the content - many posts will still be possible to view - we have locked both new posts and new replies.
How to find links to 404 pages?
-
I know that I used to be able to do this, but I can't seem to remember.
One of the sites I am working on has had a lot of pages moving around lately. I am sure some links got lost in the fray that I would like to recover. What is the easiest way to see links going to a domain that are pointing to 404 pages?
-
Where is that little button next to my crawl warnings that lets me open URLs, or explore links to that URL using OSE?
-
Specifically in Open Site Explorer, check out the "Top Pages" tab to see if any of your top linked-to pages are returning a 404. This tab is actually the first one I look at when running an analysis of a site.
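If you export that Top Pages list to CSV, a short script can flag the 404 rows for you. A minimal Python sketch - the top_pages.csv filename and its "URL" column are assumptions about your export, not a fixed OSE format:

```python
import csv

import requests

# Check each exported URL's status code and flag the 404s.
# Assumes a CSV export with a "URL" column -- adjust to your file's layout.
with open("top_pages.csv", newline="") as f:
    for row in csv.DictReader(f):
        url = row["URL"]
        try:
            # HEAD is cheaper than GET; a few servers mishandle HEAD,
            # so switch to requests.get if the results look odd.
            status = requests.head(url, allow_redirects=False, timeout=10).status_code
        except requests.RequestException as exc:
            print(f"ERROR  {url}  ({exc})")
            continue
        if status == 404:
            print(f"404    {url}")
```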
-
Sorry for the delay in my answer.
When you have detected all the 404s on your website, you can use the "Explore URL" search in Yahoo Site Explorer. If there are still backlinks pointing to those pages, Yahoo Site Explorer will show them.
To be sure, I just tried this with a 404 from a new client of mine, and discovered that one 404 page was linked from a Yale University page... obviously, I set up a 301 straight away.

-
As familiar as I am with Yahoo Site Explorer, I have never used it to find external links that go to pages that are no longer there. How can I do this with that tool?
-
Hello Spencer,
I recommend two tools
1. Xenu link sleuth (http://home.snafu.de/tilman/xenulink.html#Download)
2. Gsitecrawler ( http://gsitecrawler.com/en/download/)
Both will report all the linked pages throwing a 404 error and other status codes, including "forbidden request", "no connection", "no such host" and more.
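If you'd rather script the same check, here is a rough sketch of what these crawlers do under the hood - fetch a page, collect its links, and report any that return a 404. This is Python with requests and BeautifulSoup, checking a single page for brevity; a real crawler like Xenu recurses over the whole site:

```python
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup

START = "https://www.example.com/"  # placeholder: a page on your site

def report_broken_links(page_url):
    """Fetch one page and report every link on it that returns a 404."""
    html = requests.get(page_url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    checked = set()
    for anchor in soup.find_all("a", href=True):
        link = urljoin(page_url, anchor["href"])
        if link in checked or urlparse(link).scheme not in ("http", "https"):
            continue
        checked.add(link)
        try:
            status = requests.head(link, allow_redirects=True, timeout=10).status_code
        except requests.RequestException:
            print(f"no connection  {link}  (linked from {page_url})")
            continue
        if status == 404:
            print(f"404  {link}  (linked from {page_url})")

report_broken_links(START)
```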
Hope this helps.
Sameer
-
Did you look into Google Webmaster Tools already? You can see them there as well - though of course not all of them. You have to check back from time to time - they don't all show up at once. If you fix some, perhaps some more will come up...
-
Hi Spencer:
I don't know if this qualifies as the easiest way, but it ranks right up there:
-
You can use Open Site Explorer, but I suggest you also widen the discovery using Yahoo! Site Explorer.
Related Questions
-
Best way to link to multiple location pages
I am a magician and have multiple location pages for each county I cover. I currently have them linked off the menu under locations/<county> and also in the footer. However, I have heard that a link from within the page is much stronger, so I am experimenting with removing the menu and footer links and just linking to these pages from within the content. It's not really a navigation item and most people come in through search to the right page. Am I diluting the link by having it in the menu, page, and footer? I read a long time ago that Google only considers the first link to a page and ignores the rest - is that the case? Thanks, Roger https://www.rogerlapin.co.uk/
Technical SEO | Rogerperk0
-
How to find orphan pages
Hi all, I've been checking these forums for an answer on how to find orphaned pages on my site, and I can see a lot of people are saying that I should cross-check my XML sitemap against a Screaming Frog crawl of my site. However, the sitemap is created using Screaming Frog in the first place... (I'm sure this is the case for a lot of people too). Are there any other ways to get a full list of orphaned pages? I assume it would be a developer request, but where can I ask them to look / extract? Thanks!
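One scriptable angle that avoids the circularity: compare what visitors actually request (your server access logs) against what the crawl can reach. Pages that receive traffic but never appear in the crawl are orphan candidates. A rough Python sketch - the sitemap.xml and access.log filenames, and the combined log format, are assumptions about your setup:

```python
import re
import xml.etree.ElementTree as ET
from urllib.parse import urlparse

SITEMAP = "sitemap.xml"    # assumed: local copy of the crawl-generated sitemap
ACCESS_LOG = "access.log"  # assumed: combined-format web server log
NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

# Paths the crawl knows about, via the sitemap it generated.
crawled = {
    urlparse(loc.text.strip()).path
    for loc in ET.parse(SITEMAP).getroot().findall("sm:url/sm:loc", NS)
}

# Paths that were actually requested, per the server log.
requested = set()
get_line = re.compile(r'"GET (\S+) HTTP')
with open(ACCESS_LOG) as f:
    for line in f:
        match = get_line.search(line)
        if match:
            requested.add(urlparse(match.group(1)).path)

# Requested but unreachable by the crawl: orphan candidates.
for path in sorted(requested - crawled):
    print(path)
```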
Technical SEO | KJH-HAC1
-
How can I stop a tracking link from being indexed while still passing link equity?
I have a marketing campaign landing page and it uses a tracking URL to track clicks. The tracking links look something like this: http://this-is-the-origin-url.com/clkn/http/destination-url.com/ The problem is that Google is indexing these links as pages in the SERPs. Of course when they get indexed and then clicked, they show a 400 error because the /clkn/ link doesn't represent an actual page with content on it. The tracking link is set up to instantly 301 redirect to http://destination-url.com. Right now my dev team has blocked these links from crawlers by adding Disallow: /clkn/ in the robots.txt file; however, this blocks the flow of link equity to the destination page. How can I stop these links from being indexed without blocking the flow of link equity to the destination URL?
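One hedged approach, sketched rather than prescribed: remove the Disallow so crawlers can actually see the 301. A blocked URL can never pass equity, because Google never fetches it; a crawlable permanent redirect both passes equity and, over time, drops the /clkn/ URLs out of the index. Illustrated with a Flask route that mirrors the /clkn/ pattern above - the framework and handler names are assumptions for illustration, not your actual stack:

```python
from flask import Flask, redirect

app = Flask(__name__)

# Mirrors the /clkn/<scheme>/<destination> pattern from the question.
# With the robots.txt Disallow removed, crawlers can fetch this URL,
# follow the permanent redirect, and consolidate equity on the target.
@app.route("/clkn/<scheme>/<path:destination>")
def tracking_redirect(scheme, destination):
    # ... record the click in your analytics store here ...
    return redirect(f"{scheme}://{destination}", code=301)
```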
Technical SEO | UnbounceVan0
-
404 Error Pages being picked up as duplicate content
Hi, I recently noticed an increase in duplicate content, but all of the pages are 404 error pages. For instance, Moz site crawl says this page: https://www.allconnect.com/sc-internet/internet.html has 43 duplicates, and all the duplicates are also 404 pages (https://www.allconnect.com/Coxstatic.html for instance is a duplicate of this page). Looking for insight on how to fix this issue - do I add a rel=canonical tag to these 60 error pages that points to the original error page? Thanks!
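A first diagnostic step: confirm what status code those pages actually return. If the server sends the error template with a 200, that is a "soft 404" - crawlers treat every such URL as a real page with identical content, which is exactly the duplicate pattern described. A quick sketch using the two URLs from the question:

```python
import requests

# The two URLs from the question, plus any others Moz flags as duplicates.
urls = [
    "https://www.allconnect.com/sc-internet/internet.html",
    "https://www.allconnect.com/Coxstatic.html",
]

for url in urls:
    status = requests.get(url, timeout=10).status_code
    # 200 here means a soft 404: the error template is served as a normal
    # page, so every error URL looks like duplicate content to a crawler.
    print(status, url)
```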
Technical SEO | kfallconnect0
-
Can you use Screaming Frog to find all instances of relative or absolute linking?
My client wants to pull every instance of an absolute URL on their site so that they can update them for an upcoming migration to HTTPS (the majority of the site uses relative linking). Is there a way to use the extraction tool in Screaming Frog to crawl one page at a time and extract every occurrence of href="http://"? I have gone back and forth between using an XPath extractor as well as a regex and have had no luck with either. Ex. XPath: //*[starts-with(@href, "http://")][1] Ex. Regex: href=\"//
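As a fallback, a few lines of Python can pull the same list from any page - a sketch using requests and BeautifulSoup (the URL is a placeholder), with an equivalent regex in the comment if you want to paste one back into Screaming Frog's custom extraction:

```python
import requests
from bs4 import BeautifulSoup

PAGE = "http://www.example.com/"  # placeholder: the page to audit

html = requests.get(PAGE, timeout=10).text
soup = BeautifulSoup(html, "html.parser")

# Every anchor whose href is an absolute http:// URL.
for anchor in soup.find_all("a", href=True):
    if anchor["href"].startswith("http://"):
        print(anchor["href"])

# Roughly equivalent regex for a custom extractor:
#   href=["'](http://[^"']+)["']
```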
Technical SEO | Merkle-Impaqt0
-
Updating inbound links vs. 301 redirecting the page they link to
Hi everyone, I'm preparing myself for a website redesign and finding conflicting information about inbound links and 301 redirects. If I have a URL (we'll say website.com/website) that is linked to by outside sources, should I get those outside sources to update their links when I change the URL to website.com/webpage? Or is it just as effective from a link juice perspective to simply 301 redirect the old page to the new page? Are there any other implications to this choice that I may want to consider? Thanks!
Technical SEO | Liggins0
-
Product Pages Outranking Category Pages
Hi, We are noticing an issue where some product pages are outranking our relevant category pages for certain keywords. For a made-up example, a "heavy duty widgets" product page might rank for the keyword phrase Heavy Duty Widgets, instead of our Heavy Duty Widgets category page appearing in the SERPs. We've noticed this happening primarily in cases where the name of the product page contains an at least partial match for the desired keyword phrase we want the category page to rank for. However, we've also found isolated cases where the specified keyword points to a completely irrelevant page instead of the relevant category page. Has anyone encountered a similar issue before, or have any ideas as to what may cause this to happen? Let me know if more clarification of the question is needed. Thanks!
Technical SEO | ShawnHerrick0
-
Thoughts about stub pages - 200 & noindex ok, or 404?
With large database/template-driven websites it is often possible to get a lot of pages with no content on them. What are the current thoughts regarding these pages with no content? Options:
1. Return a 200 header code with a noindex meta tag
2. Return a 404 page & header code
3. Something else?
Thanks
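For illustration, both options are a small amount of application code. A hedged Flask sketch - the page_content lookup is a stand-in for your own template/database logic, not a real API:

```python
from flask import Flask, abort

app = Flask(__name__)

def page_content(slug):
    """Stand-in for a database lookup; returns None for contentless stubs."""
    return {"widgets": "Widget catalogue..."}.get(slug)

@app.route("/catalogue/<slug>")
def catalogue(slug):
    content = page_content(slug)
    if content is None:
        # Option 2: a real 404 status, so crawlers drop the empty page
        # rather than treating it as thin or duplicate content.
        abort(404)
        # Option 1 would instead render the template with a 200 and a
        # <meta name="robots" content="noindex"> tag in its <head>.
    return content
```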
Technical SEO | slingshot0