The Moz Q&A Forum


    Moz Q&A is closed.

    After more than 13 years, and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we’re not completely removing the content - many posts will still be possible to view - we have locked both new posts and new replies. More details here.

    Wildcarding Robots.txt for Particular Word in URL

    Intermediate & Advanced SEO
    • EvansHunt:

      Hey All,

      So I know this isn't standard robots.txt usage; I'm aware of how to block or wildcard certain folders, but I'm wondering whether it's possible to block all URLs with a certain word in them.

      We have a client that was hacked a year ago, and now they want us to help remove some of the pages that were autogenerated with the word "viagra" in them. I saw this article (https://builtvisible.com/wildcards-in-robots-txt/) and tried implementing it, and it seems I've been able to remove some of the URLs (although I can't confirm until I do a full pull of the SERPs on the domain). However, when I test certain URLs inside WMT, it still says they are allowed, which makes me think it's not working fully, or not working at all.

      In this case, these are the lines I've added to the robots.txt:

      Disallow: /*&viagra

      Disallow: /*&Viagra

      I know I have the option of individually requesting URLs to be removed from the index, but I want to see if anybody has ever had success with wildcarding URLs for a certain word in their robots.txt. The individual URL route could be very tedious.

      Thanks!

      Jon
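      (For reference, not part of the original post:) Google's wildcard extension treats `*` as matching any run of characters and a trailing `$` as anchoring the rule to the end of the URL, with the rule otherwise acting as a prefix match against the path. A minimal sketch of that matching logic in Python — the example rules are the two from the question, and the sample paths are illustrative:

```python
import re

def robots_rule_matches(rule: str, url_path: str) -> bool:
    """Google-style robots.txt matching: the rule is a prefix match against
    the URL path, '*' matches any run of characters, and a trailing '$'
    anchors the rule to the end of the path. Matching is case-sensitive,
    which is why the question needs separate 'viagra' and 'Viagra' rules."""
    anchored = rule.endswith("$")
    body = rule[:-1] if anchored else rule
    regex = "^" + re.escape(body).replace(r"\*", ".*") + ("$" if anchored else "")
    return re.match(regex, url_path) is not None

rules = ["/*&viagra", "/*&Viagra"]

# A path with the word after an '&' is matched...
print(any(robots_rule_matches(r, "/page?id=1&viagra-cheap") for r in rules))  # True
# ...but a path where the word is not preceded by '&' is not:
print(any(robots_rule_matches(r, "/buy-viagra-online") for r in rules))       # False
```

      Note that the two rules above only catch `&viagra`/`&Viagra`; a broader `Disallow: /*viagra` pattern would catch the word anywhere in the path, in either position.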

      • EvansHunt (replying to ThompsonPaul):

        Hey Paul,

        Great answer; for some reason it totally slipped my mind that robots.txt is a crawling directive and not an indexing one. Yes, the pages return a 404 in the headers. I've grabbed a copy of the complete SERPs and will now manually request removal for them.

        Thanks!

        Jon

        • ThompsonPaul:

          Thanks for the endorsement, Christy! Funny, I only just now saw Rand's recent WBF related to this topic, but I'm pleased to see my answer lines up exactly with his info. 🙂

          P.

          • ThompsonPaul:

            You need to be aware, Jonathan, that there is absolutely nothing about a robots.txt disallow that will help remove a URL from the search engine indexes. Robots.txt is a crawling directive, NOT an indexing directive. In fact, in most cases, blocking URLs in robots.txt will actually cause them to remain in the index even longer.

            I'm assuming you have cleaned up the site so the actual spam URLs no longer resolve. Those URLs should now result in a 404 error page. You must confirm they are actually returning the correct 404 code in the headers. As long as this is the case, it is a matter of waiting while the search engines crawl the spam URLs often enough to recognise they are really gone and remove them from the index. The problem with adding them to the robots.txt is that it actually tells the search engines NOT to crawl those URLs, so they are unlikely to discover that the URLs lead to 404s, hence they may remain in the index even longer.

            Unfortunately you can't use a noindex tag on the offending pages, because the pages should no longer exist on the site. I don't think even a careful implementation of an X-Robots noindex directive in .htaccess would work, because the URLs should be resulting in a 404.

            Make certain the problem URLs return a clean 404, use the Google Search Console Remove URLs tool for as many of them as you can (for example you can request removal for entire directories, if the spam happened to be built that way), and then be patient for the rest. But do NOT block them in robots.txt - you'll just prolong the agony and waste your time.

            Hope that all makes sense?

            Paul
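            (An illustrative sketch, not code from the thread:) the header check Paul describes, confirming the cleaned-up URLs really return a 404 status code rather than a "soft 404" page served with 200, can be automated with a stdlib-only helper that issues a HEAD request and reports the status. The URL list is hypothetical:

```python
import urllib.error
import urllib.request

def status_of(url: str) -> int:
    """Return the HTTP status code for a HEAD request to `url`.
    urlopen raises HTTPError for 4xx/5xx responses, but the error
    object still carries the real status code."""
    req = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.status
    except urllib.error.HTTPError as e:
        return e.code

# Hypothetical list of the cleaned-up spam URLs:
# for url in spam_urls:
#     assert status_of(url) == 404, f"{url} did not return a hard 404"
```

            A 410 (Gone) status would also work here and can be dropped from the index slightly faster, since it signals permanent removal rather than a possibly temporary error.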

            • Martijn_Scheijbeler:

              Hi Jon,

              Why not just: Disallow: /viagra

              • LesleyPaone:

                Jon,

                I have never done it with robots.txt; one easy way that I think you could do it would be at the page level. You could add a noindex, nofollow tag to the page itself.

                You can generate it automatically too, and have it fire depending on the URL by using a substring search on the URL. That will get them all for sure.
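                (A sketch under assumptions, not code from the thread:) if the site were served by a Python app, the "fire it depending on the URL" idea could look like a WSGI middleware that adds an `X-Robots-Tag: noindex, nofollow` header whenever the requested path contains a flagged substring. The function name and the `needles` list are invented for illustration; note this only helps for pages that still resolve, not for URLs already returning 404:

```python
def noindex_spam(app, needles=("viagra",)):
    """WSGI middleware: inject an X-Robots-Tag noindex header when the
    request path contains any of the flagged substrings (case-insensitive).
    X-Robots-Tag in the HTTP response is equivalent to a meta robots tag."""
    def middleware(environ, start_response):
        path = environ.get("PATH_INFO", "").lower()
        def patched_start(status, headers, exc_info=None):
            if any(n in path for n in needles):
                headers = list(headers) + [("X-Robots-Tag", "noindex, nofollow")]
            return start_response(status, headers, exc_info)
        return app(environ, patched_start)
    return middleware
```

                The same substring check could equally be done in an Apache/nginx rule; the middleware form just shows the logic in one self-contained place.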




