    The Moz Q&A Forum


    Moz Q&A is closed.

    After more than 13 years and tens of thousands of questions, Moz Q&A closed on 12th December 2024. While the content is not being completely removed - many posts will still be viewable - both new posts and new replies have been locked.

    Robots.txt is blocking WordPress pages from Googlebot?

    Intermediate & Advanced SEO
    • ENSO:

      I have a robots.txt file on my server which I did not create; it was set up by the web designer who worked at the company before me. There is also a WordPress plugin that generates a robots.txt file. How do I unblock all the WordPress pages from Googlebot?

      • Desiree-CP (replying to ENSO):

        Delete everything under the following directives and you should be good:

        User-agent: Googlebot
        Disallow: /*/trackback
        Disallow: /*/feed
        Disallow: /*/comments
        Disallow: /?
        Disallow: /*?
        Disallow: /page/

        As a rule of thumb, it's not a good idea to use wildcards in your robots.txt file - you may inadvertently exclude an entire folder.
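
        To see concretely how those patterns behave, here is a minimal sketch (not Google's actual matcher) that translates Googlebot-style wildcard rules into regular expressions and tests them against a few made-up WordPress-style paths; the example paths are hypothetical and only for illustration:

        import re

        # Minimal sketch of Googlebot-style rule matching (not Google's actual
        # implementation): a rule is a prefix match in which "*" matches any run
        # of characters and "$" anchors the end of the URL.
        def pattern_to_regex(pattern):
            escaped = re.escape(pattern).replace(r"\*", ".*").replace(r"\$", "$")
            return re.compile("^" + escaped)

        googlebot_rules = ["/*/trackback", "/*/feed", "/*/comments", "/?", "/*?", "/page/"]

        # Hypothetical WordPress-style paths, for illustration only.
        test_paths = [
            "/2012/05/some-blog-post/",    # plain permalink - not matched by any rule
            "/?p=240",                     # default "ugly" post URL - caught by /? and /*?
            "/category/news/?paged=2",     # any URL with a query string - caught by /*?
            "/page/3/",                    # paginated archive - caught by /page/
            "/guides/feed-optimization/",  # ordinary page caught by /*/feed
        ]

        for path in test_paths:
            hits = [rule for rule in googlebot_rules if pattern_to_regex(rule).match(path)]
            verdict = ("blocked by " + ", ".join(hits)) if hits else "allowed"
            print(f"{path:32} -> {verdict}")

        The last example is the kind of accidental exclusion the wildcard warning above refers to: /*/feed also matches any path that merely contains /feed somewhere after the first segment.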

        • ENSO (replying to Desiree-CP):

          Here is my robots.txt from Google Webmaster Tools. These are the pages that are being blocked, and I am not sure which of these rules to remove in order to unblock blog posts from being searched:

          http://ensoplastics.com/theblog/?cat=743

          http://ensoplastics.com/theblog/?p=240

          These category pages and blog posts are blocked, so do I delete the /? rule? I am new to SEO and web development, so I am not sure why the developer of this robots.txt file would block pages and posts in WordPress. It seems to me that the whole reason someone has a blog is so it can be searched and gain more exposure for SEO purposes.

          Sitemap: http://www.ensobottles.com/blog/sitemap.xml

          User-agent: Googlebot
          Disallow: /*/trackback
          Disallow: /*/feed
          Disallow: /*/comments
          Disallow: /?
          Disallow: /*?
          Disallow: /page/

          User-agent: *
          Disallow: /cgi-bin/
          Disallow: /wp-admin/
          Disallow: /wp-includes/
          Disallow: /wp-content/plugins/
          Disallow: /wp-content/themes/
          Disallow: /trackback
          Disallow: /comments
          Disallow: /feed
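
          If the goal is simply to let Googlebot crawl the blog, one option (a sketch, assuming this file is the one served for the blog's host and that the generic WordPress housekeeping rules should stay) is to drop the entire User-agent: Googlebot group, since its Disallow: /*? rule is what matches query-string URLs like /theblog/?cat=743 and /theblog/?p=240 under Google's wildcard matching. The file would then look something like:

          Sitemap: http://www.ensobottles.com/blog/sitemap.xml

          User-agent: *
          Disallow: /cgi-bin/
          Disallow: /wp-admin/
          Disallow: /wp-includes/
          Disallow: /wp-content/plugins/
          Disallow: /wp-content/themes/
          Disallow: /trackback
          Disallow: /comments
          Disallow: /feed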

          • Desiree-CP:

            I'm not sure to what extent your website is being blocked by the robots.txt file, but it's pretty easy to diagnose. First, confirm that Googlebot is being blocked by typing www.mywebsite.com/robots.txt into your web browser.

            If you see an entry such as "User-agent: *" or "User-agent: googlebot" used in conjunction with "Disallow", then you know your website is being blocked by the robots.txt file. Given your situation, you'll need to go through a three-step process.

            First, go to your WordPress plugins page and deactivate the plugin that generates your robots.txt file. Second, log in to the root folder of your server and look for the robots.txt file. Third, change "Disallow" to "Allow"; that should work, but confirm by loading the robots.txt URL again.

            Given the limited information in your question, I hope that helps. If you run into any more issues, don't hesitate to post them here.
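
            For a quick programmatic version of that first check, here is a rough sketch that fetches a robots.txt and prints the Disallow rules declared under each User-agent group (www.mywebsite.com is the placeholder domain from the answer above; substitute your own):

            from urllib.request import urlopen

            # Fetch a robots.txt and list the Disallow rules under each User-agent
            # group. "www.mywebsite.com" is a placeholder domain.
            url = "http://www.mywebsite.com/robots.txt"

            with urlopen(url) as resp:
                lines = resp.read().decode("utf-8", errors="replace").splitlines()

            agents, in_rules = [], False
            for raw in lines:
                line = raw.split("#", 1)[0].strip()   # drop comments and whitespace
                if not line or ":" not in line:
                    continue
                field, value = [part.strip() for part in line.split(":", 1)]
                field = field.lower()
                if field == "user-agent":
                    if in_rules:                      # a rule line ended the previous group
                        agents, in_rules = [], False
                    agents.append(value)
                elif field in ("allow", "disallow"):
                    in_rules = True
                    if field == "disallow" and value:
                        print(f"{', '.join(agents) or '(no user-agent)'}: Disallow {value}")

            If that prints rules under "User-agent: *" or "User-agent: Googlebot" that cover your blog URLs, the file (or the plugin generating it) is what is keeping those pages out of the index.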



