    Moz Q&A is closed.

After more than 13 years and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we're not completely removing the content - many posts will still be viewable - we have locked both new posts and new replies.

    Soft 404's from pages blocked by robots.txt -- cause for concern?

    Intermediate & Advanced SEO
• nicole.healthline:

We're seeing soft 404 errors appear in our Google Webmaster Tools section for pages that are blocked by robots.txt (our search result pages).

      Should we be concerned? Is there anything we can do about this?
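
Before digging further, it can help to confirm that the URLs being flagged as soft 404s really are the ones disallowed in robots.txt. Below is a minimal Python sketch using only the standard library's robotparser; the domain and search-result URLs are hypothetical placeholders, not the actual site's paths.

from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt location; substitute the real site's URL.
rp = RobotFileParser("https://www.example.com/robots.txt")
rp.read()

# Hypothetical search-result URLs of the kind reported as soft 404s.
urls = [
    "https://www.example.com/search?q=headache",
    "https://www.example.com/search?q=vitamins",
]

for url in urls:
    if rp.can_fetch("Googlebot", url):
        print(f"{url} -> crawlable (not blocked by robots.txt)")
    else:
        print(f"{url} -> blocked by robots.txt")

If the flagged URLs come back as crawlable here, the soft 404 report is not about the robots.txt block at all and the pages themselves need a closer look.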

• CleverPhD @CleverPhD:

  Me too. It was that video that helped to clear things up for me. Then I could see when to use robots.txt vs. the noindex meta tag. It has made a big difference in how I manage sites that have large amounts of content that can be sorted in a huge number of ways.

• Highland @CleverPhD:

  Good stuff. I was always under the impression they still crawled them (otherwise, how would they know if the block was removed?).

• CleverPhD @Highland:

  Take a look at http://www.youtube.com/watch?v=KBdEwpRQRD0 to see what I am talking about. Robots.txt does prevent crawling, according to Matt Cutts.

• Highland:

  Robots.txt prevents indexation, not crawling. The good news is that Googlebot stops crawling 404s.

• CleverPhD:

  Just a couple of under-the-hood things to check.

  1. Are you sure your robots.txt is set up correctly? Check in GWT to see that Google is reading it.

  2. This may be a timing issue. Errors take 30-60 days to drop out (from what I have seen), so did they show as soft 404s before you added them to robots.txt? If that was the case, this may be a sequence issue: Google finds a soft 404 (or some other error), then comes back to spider the page but cannot crawl it due to robots.txt, so it does not know the current status of the page and may just keep the last status it found.

  3. I tend to see soft 404s on pages that have a 301 redirect with a many-to-one association. In other words, you have a bunch of pages that are 301ing to a single page. You may want to consider changing where some of the 301s redirect so that they go to a specific page rather than an index page.

  4. If you have pages in robots.txt because you do not want them in Google, here is what I would do: show a 200 on each page, but put a noindex, nofollow in the meta tags.

  http://support.google.com/webmasters/bin/answer.py?hl=en&answer=93710

  "When we see the noindex meta tag on a page, Google will completely drop the page from our search results, even if other pages link to it"

  Let Google spider the page so that it can see the 200 code - that gets rid of the soft 404 errors. Then toss in the noindex, nofollow meta tags to have the page removed from the Google index. It sounds backwards that you have to let Google spider a page in order to remove it, but it works if you walk through the logic.

  Good luck!
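
One way to sanity-check the 200-plus-noindex setup described above is a quick script that fetches a page and confirms both the status code and the robots meta tag. This is a rough, standard-library-only Python sketch; the URL is a hypothetical placeholder and the regex test is deliberately crude (a real audit would parse the HTML properly).

import re
import urllib.request

# Hypothetical URL; replace with one of the pages you want dropped from the index.
url = "https://www.example.com/search?q=test"

with urllib.request.urlopen(url) as resp:
    status = resp.status  # should be 200 so Googlebot can see the page
    html = resp.read().decode("utf-8", errors="replace")

# Crude check for a tag like <meta name="robots" content="noindex, nofollow">.
has_noindex = bool(re.search(r'<meta[^>]+name=["\']robots["\'][^>]*noindex', html, re.I))

print(f"status={status}, noindex meta tag found={has_noindex}")

That combination - a crawlable 200 response plus a noindex tag - is what the approach above relies on to clear the soft 404 report and get the page removed from the index.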

