    The Moz Q&A Forum


    Moz Q&A is closed.

After more than 13 years and tens of thousands of questions, Moz Q&A closed on 12th December 2024. While we're not completely removing the content (many posts will still be viewable), we have locked both new posts and new replies.

Soft 404s from pages blocked by robots.txt -- cause for concern?

    Intermediate & Advanced SEO
• nicole.healthline

We're seeing soft 404 errors appear in our Google Webmaster Tools account for pages that are blocked by robots.txt (our search result pages).

      Should we be concerned? Is there anything we can do about this?

• CleverPhD @CleverPhD

Me too. It was that video that helped clear things up for me. Then I could see when to use robots.txt vs. the noindex meta tag. It has made a big difference in how I manage sites with large amounts of content that can be sorted in a huge number of ways.

• Highland @CleverPhD

Good stuff. I was always under the impression they still crawled them (otherwise, how would they know if the block was removed?).

• CleverPhD @Highland

Take a look at http://www.youtube.com/watch?v=KBdEwpRQRD0 to see what I am talking about.

Robots.txt does prevent crawling, according to Matt Cutts.
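
For reference, here is a minimal robots.txt sketch of the kind of block being discussed (the /search/ path is an assumption; the original question only says the blocked pages are search result pages):

    # Hypothetical robots.txt entry blocking a site's search result pages.
    # Googlebot will not fetch these URLs while this rule is in place, but
    # URLs it already discovered (e.g. via links) can remain in the index,
    # which is why a stale soft 404 status can linger in Webmaster Tools.
    User-agent: *
    Disallow: /search/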

• Highland

              Robots.txt prevents indexation, not crawling. The good news is that Googlebot stops crawling 404s.

• CleverPhD

Just a couple of under-the-hood things to check:

1. Are you sure your robots.txt is set up correctly? Check in Google Webmaster Tools to confirm that Google is reading it.

2. This may be a timing issue. Errors take 30-60 days to drop out (from what I have seen), so did the pages show soft 404s before you added them to robots.txt? If so, this may be a sequencing issue: Google finds a soft 404 (or some other error), then comes back to spider the page and cannot crawl it due to robots.txt. Since it does not know the current status of the page, it may just keep the last status it found.

3. I tend to see soft 404s on pages that have a many-to-one 301 redirect - in other words, a bunch of pages all 301ing to a single page. You may want to change where some of those 301s redirect so that they go to a specific page rather than an index page.

4. If you have a page in robots.txt because you do not want it in Google, here is what I would do: serve a 200 on that page, but put a noindex, nofollow in its robots meta tag.

                http://support.google.com/webmasters/bin/answer.py?hl=en&answer=93710

                "When we see the noindex meta tag on a page, Google will completely drop the page from our search results, even if other pages link to it"

Let Google spider the page so that it can see the 200 code - that gets rid of the soft 404 errors. Then the noindex, nofollow meta tags get the page removed from the Google index. It sounds backwards that you have to let Google spider a page in order to remove it, but it works if you walk through the logic.
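
As a concrete illustration of that last step, here is a minimal sketch (the page markup is hypothetical, not from the thread): the search result page keeps returning HTTP 200, and the robots meta tag does the de-indexing. Note that the page must also be removed from robots.txt, or Googlebot will never get to see the tag:

    <!-- Hypothetical search result page, served with HTTP 200 -->
    <html>
      <head>
        <title>Search results</title>
        <!-- Tells Google to drop this page from its index and not to
             follow its links, per the support article quoted above -->
        <meta name="robots" content="noindex, nofollow">
      </head>
      <body>...</body>
    </html>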

                Good luck!
