The Moz Q&A Forum


    Moz Q&A is closed.

    After more than 13 years and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we're not completely removing the content - many posts will still be viewable - we have locked both new posts and new replies. More details here.

    Meta NoIndex tag and Robots Disallow

    Intermediate & Advanced SEO
    • bjs2010

      Hi all,

      I hope you can spend some time to answer my first of a few questions 🙂

      We are running a Magento site - the layered/faceted navigation nightmare has created thousands of duplicate URLs!

      Anyway, during my process to tackle the issue, I used robots.txt to disallow any query-string URL whose parameter was not p (I allowed that one for pagination).
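A rule set along those lines might look like the following sketch (the paths are illustrative, since the actual file wasn't shared; Googlebot resolves competing Allow/Disallow rules by the longest matching pattern, so the more specific Allow wins for pagination URLs):

```
User-agent: *
# Block every URL that carries a query string...
Disallow: /*?
# ...but keep the pagination parameter crawlable
Allow: /*?p=
```

Note that wildcard (*) support in paths is a Google/Bing extension to the original robots.txt standard, not something every crawler honours.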

      After checking some pages in Google with site:www.mydomain.com/specificpage.html, a few duplicates came up along with the original, showing:
      "There is no information about this page because it is blocked by robots.txt"

      So I had also added meta noindex, follow on all these duplicates, but I guess it wasn't being read because of the robots.txt block.

      So coming to my question.

      1. Did robots.txt block access to these pages? If so, were they already in the index, and once disallowed, could Googlebot no longer read the meta noindex?

      2. Does meta noindex, follow on pages actually help Googlebot decide to remove these pages from the index?

      I thought Robots would stop and prevent indexation? But I've read this:
      "Noindex is a funny thing, it actually doesn't mean 'You can't index this', it means 'You can't show this in search results'. Robots.txt disallow means 'You can't index this' but it doesn't mean 'You can't show it in the search results'."

      I'm a bit confused about how to use these in both preventing duplicate content in the first place and then helping to address dupe content once it's already in the index.

      Thanks!

      B

      • ThompsonPaul @bjs2010

        There's no real way to estimate how long the re-crawl will take, Ben. You can get a bit of an idea by looking at the crawl rate reported in Google Webmaster Tools.

        Yes, asking for a page fetch and then submitting with linked pages for each of the main website sections can help speed up crawl discovery. In addition, make sure you've submitted a current sitemap and that it's getting found correctly (also reported in GWT). You should also do the same in Bing Webmaster Tools - too many sites forget about optimizing for Bing; even if it's only 20% of Google's traffic, there's no point throwing it away.

        Lastly, earning some new links to different sections of the site is another great signal. This can often be done effectively using social media - especially Google+, as it gets crawled very quickly.

        As far as your other question - yes, once you get the unwanted URLs out of the index, you can add the robots.txt disallow back in to optimise your crawl budget. I would strongly recommend you leave the meta-robots noindex tag in place, though, as a "belt & suspenders" approach to keep links pointing at those unwanted pages from triggering a re-indexing. It's OK to have both in place as long as the de-indexing has already been accomplished, as we've discussed.

        Hope that answers your questions!

        Paul

        • bjs2010

          So once Google has started to see the meta noindex and is slowly de-indexing pages, and once that is done, I would like to block it from crawling them with robots.txt to conserve my crawl budget.

          But there are still internal links on the site that point to these URLs - would they get back into the index in this case?

          • bjs2010 @ThompsonPaul

            Hi Paul,

            Thank you for your detailed answer - so I'm not going crazy 🙂

            I did try canonicals, but then realized they are more of a suggestion than a directive. I'm still correcting a lot of dupe content and 404s, so I imagine Google views the site as "these guys don't know what they are doing" and may have ignored the canonical suggestion.

            So what I have done is remove the robots.txt block on the pages I want de-indexed and add meta noindex, follow to those pages. From what you are saying, they should naturally de-index, after which I will put the robots.txt block back on to keep my crawl budget spent on better areas of the site.

            How long, in your opinion, might it take Googlebot to de-index the pages? Can I help it along at all to speed things up - fetch the page and linking pages as Googlebot?

            Thanks again,

            Ben

            • ThompsonPaul

              You're right to be confused, B. The terminology is unfortunate and misleading.

              To answer your questions:

              1. Yes.

              2. Yes.

              A disallow in robots.txt does nothing to remove already-indexed pages. That's not its purpose. Its only purpose is to tell the search crawlers not to waste their time crawling those pages. Even if pages have been blocked in robots, they will remain in the index if already there. Even if never crawled, and blocked in robots.txt, they can still end up indexed if some other indexed page links to them and the crawlers find those pages by following links. Again, nothing in a robots.txt disallow tells the engines to remove a page from the index, just not to waste time crawling it.

              Put another way, the robots.txt disallow directive only disallows crawling - it says nothing about what to do if the page gets into the index in other ways.
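That crawl-versus-index distinction shows up even in Python's standard-library robots.txt parser, which can only answer "may I fetch this URL?" and says nothing about indexing (the rules and URLs below are illustrative, not the actual Magento site's):

```python
from urllib.robotparser import RobotFileParser

# Illustrative rules: block a faceted-search path. The stdlib parser
# implements the original standard (prefix matching, no wildcards).
rules = [
    "User-agent: *",
    "Disallow: /search",
]

parser = RobotFileParser()
parser.parse(rules)
parser.modified()  # mark rules as loaded; can_fetch() refuses to answer otherwise

# Disallow only answers the crawling question...
blocked = parser.can_fetch("*", "https://www.example.com/search?q=shoes")
allowed = parser.can_fetch("*", "https://www.example.com/category.html")

print(blocked)  # False - the crawler may not fetch this URL
print(allowed)  # True  - anything not disallowed stays crawlable
# ...neither result says anything about whether the URL is in the index.
```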

              The meta-robots no-index tag however explicitly states to the crawler "if you arrive at this page, do not add it to the index. If it is already in the index, remove it".
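In HTML, that tag sits in the page's head; keeping follow alongside noindex (as Ben did) lets link equity keep flowing through the de-indexed pages:

```html
<meta name="robots" content="noindex, follow">
```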

              And yes - as you suspected - if pages are blocked in robots.txt, the crawler obeys and doesn't visit those pages, so it can't discover the noindex command to drop them from the index. The only way such a page could get dropped is if a crawler followed a link from an external site and discovered the page that way - a very inefficient way of trying to get all those pages out of the index.

              Bottom line - robots.txt is never the correct tool to deal with duplicate content issues. Its sole purpose is to keep the crawlers from wasting time on unimportant pages so they can spend more time finding (and therefore indexing) more important pages.

              The three tools for dealing with duplicate content are meta-robots no-index tags in a page header, 301 redirects, and canonical tags. Which one to use depends on the architecture of your site, your intended purpose, and the site's technical limitations.
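For completeness, the canonical-tag option from that list is a link element in the duplicate page's head pointing at the preferred URL (the URLs here are hypothetical):

```html
<!-- in the <head> of a duplicate such as /shoes.html?color=red -->
<link rel="canonical" href="https://www.example.com/shoes.html">
```

As Ben noted earlier in the thread, engines treat this as a strong hint rather than a directive.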

              Hope that makes sense?

              Paul

              bjs2010 1 Reply Last reply Reply Quote 1
