    The Moz Q&A Forum


    Moz Q&A is closed.

After more than 13 years, and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we're not completely removing the content - many posts will still be viewable - we have locked both new posts and new replies.

    Sanity Check: NoIndexing a Boatload of URLs

    Intermediate & Advanced SEO
94501

      Hi,

I'm working with a Shopify site that has about 10x more URLs in Google's index than it really ought to have - thousands of URLs bloating the index. Shopify makes it super easy to create endless new collections of products, where none of the new collections has any new content... just a new mix of products. Over time, this makes for a ton of duplicate content.

My response, aside from creating other new/unique content, is to select some choice collections with keyword/topic opportunities in organic search and add unique content to those pages, while noindexing the other 90% of excess collection pages.

The thing is, there's evidently no way I could find to just upload a list of URLs to Shopify and tag them noindex. And it's too time-consuming to do this one URL at a time, so I wrote a little script to add a noindex tag (not nofollow) to pages that share identical title tags, since many of them do. This saves some time, but I have to be careful not to inadvertently noindex a page I want to keep.
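
For illustration, the general shape of that kind of bulk tagging looks something like the sketch below - done against the Shopify Admin REST API rather than by title-tag matching in the theme. It's a rough sketch, not the exact script: the store domain, access token, API version, endpoint paths and the seo.hidden metafield behaviour are placeholders/assumptions to verify against Shopify's current docs before running anything.

```python
# Rough sketch only: group collections by identical title and noindex the
# duplicates via the `seo.hidden` metafield (which Shopify uses to emit a
# noindex tag and drop the page from the sitemap). Domain, token, API
# version and endpoint paths are assumptions to verify against the docs.
from collections import defaultdict

import requests

SHOP = "example-store.myshopify.com"          # placeholder store
API = f"https://{SHOP}/admin/api/2024-01"     # API version is an assumption
HEADERS = {"X-Shopify-Access-Token": "shpat_placeholder_token"}
KEEP_HANDLES = {"mens-boots", "best-sellers"}  # collections to leave indexed

def fetch_collections():
    """Fetch custom collections (smart collections need a second, similar call)."""
    resp = requests.get(f"{API}/custom_collections.json",
                        headers=HEADERS, params={"limit": 250})
    resp.raise_for_status()
    return resp.json()["custom_collections"]

def noindex(collection_id):
    """Set the seo.hidden metafield on one collection."""
    payload = {"metafield": {"namespace": "seo", "key": "hidden",
                             "value": 1, "type": "number_integer"}}
    resp = requests.post(f"{API}/collections/{collection_id}/metafields.json",
                         json=payload, headers=HEADERS)
    resp.raise_for_status()

by_title = defaultdict(list)
for c in fetch_collections():
    by_title[c["title"].strip().lower()].append(c)

for title, group in by_title.items():
    if len(group) < 2:
        continue                      # unique title, leave it alone
    for c in group:
        if c["handle"] in KEEP_HANDLES:
            continue                  # protect the pages we plan to build out
        print(f"noindexing /collections/{c['handle']} (duplicate title: {title!r})")
        noindex(c["id"])
```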

      Here are my questions:

• Is this what you would do? To me it seems a little crazy that I have to do this by title tag, although it's faster than doing it one URL at a time.

• Would you follow it up with a deindex request (one URL at a time) with Google, or just let Google figure it out over time?

      • Are there any potential negative side effects from noindexing 90% of what Google is already aware of?

      • Any additional ideas?

      Thanks! Best... Mike

Nigel_Carr

        Hi Michael

The problem you have is the very low-value content that exists on all of those pages, and the complete impossibility of writing unique titles, descriptions and content for them. There are just too many of them.

With a footwear client of mine I noindexed a huge slug of tag pages, taking the page count down by about 25%, and we saw an immediate 22% increase in organic traffic in the first month (18th March 2017 to 17th April 2017). The duplicates were all size- and colour-related. Since canonicalising (I'm English, lol) more content and taking the site from 25,000 pages down to around 15,000, the site is now 76% ahead of last year for organics. This is real, measurable change.

        Now the arguments:

        Canonicalisation

How are you going to canonicalise 10,000+ pages? Unless you have some kind of magic bullet, you are not going to be able to. But let's look at the logic.

Say we have a page of Widgets (a brand) and they come in 7 sizes. When the range is fully in stock, all of the brand/size pages will be identical to the brand page, apart from the title and description, so it would make sense to canonicalise back to the brand. Even when sizes start to run out, all of the available sizes will still be on the brand page. So size is a subset of the brand page.

It's similar, but not quite the same, for colour. If colour is a tag, then everything on a colour-sorted page will also be on the brand page, so really they are the same page - just a slimmer selection. I accept that the brand page will contain all colours, as it did all sizes, but the similarity is so great (95% of the content being the same apart from the colour) that it makes sense to treat them as the same page.

So for me, canonicalisation would be the way to go, but it's just not possible, as there are too many of them.

        Noindex

The upside of noindex is that it is generally easier to put the noindex tag on the page, as there is no canonical URL to work out. The downside is that the page is then not indexed in Google, so you lose a little juice. I would argue, by the way, that the chances of being found in Google for a size page are extremely slim: less than 2% of visits came from size pages before we junked them, and most of those were from a newsletter, so in reality it's under 1% and not worth bothering about. You could leave off the nofollow so that Google still crawls through all of the links on the pages - the better option.
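
If it helps, here's a quick way to check what a given URL is actually serving once the tags are in place - a minimal sketch that looks at both the robots meta tag and the X-Robots-Tag header (the example URL is made up):

```python
# Quick check: is a URL serving noindex, and is it still followable?
# Looks at both the robots meta tag and the X-Robots-Tag response header.
# Sketch only - the regex assumes name="robots" appears before content="...".
import re

import requests

META_ROBOTS = re.compile(
    r'<meta[^>]+name=["\']robots["\'][^>]+content=["\']([^"\']+)["\']',
    re.IGNORECASE,
)

def robots_directives(url):
    resp = requests.get(url, timeout=10)
    directives = set()
    header = resp.headers.get("X-Robots-Tag", "")
    directives.update(d.strip().lower() for d in header.split(",") if d.strip())
    for match in META_ROBOTS.finditer(resp.text):
        directives.update(d.strip().lower() for d in match.group(1).split(","))
    return directives

# Hypothetical size page: we want noindex present and nofollow absent.
found = robots_directives("https://www.example.com/womens/boots/size-4")
print("noindex" in found, "nofollow" in found)   # ideally: True False
```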

Considering your problem, and having experience of a number of sites with the same issue, noindex is your solution.

        I hope that helps

        Kind Regards

        Nigel - Carousel Projects.

94501

          Hi Chris & Nigel,

Thank you for the considered responses. Good points about canonicalizing. One part I find frustrating is that a title tag shared across dozens or hundreds of pages will span many different products/groups of products, so the title tag is not a solid way to group canonicals.

Since the URL patterns vary, I don't see how I could work out which dozens or hundreds of pages should canonicalize to which one page, let alone make the change in Shopify other than one page at a time. My understanding is that this title-tag manipulation is the only handle Shopify gives for making these bulk changes.

          Gah!

          So, here are my follow up questions:

• How big of a negative is this in its as-is state, and how much better will noindexing most of the excess 90% make things, Google organic-wise? I ask because even the BS title-tag-to-noindex project is a huge time suck.

• If more is ever revealed about how to group and canonicalize more efficiently in Shopify, would adding the canonical after noindexing recapture that lost authority later, or would the previous noindex have irretrievably lost it?

          • Given all that, would you continue as I am?

          Thanks! Best... Mike

Nigel_Carr

            Hi Mike

I see this a lot with sites that have a ton of tag groups. One site I am working on has 50,000 pages in Google's index caused by tags appending themselves to every version of a URL, yet the site only has 400 products. For example:

            Site/size-4
            Site/womens/size-4
            Site/womens/boots/size-4
            Site/womens/boots/ankle/size-4
            Site/womens/clarks/boots/size-4

Etc., etc. If there are other tags like colour and features, this can create a huge three-dimensional matrix of additional pages that slows down the crawl of the site; Google may not crawl all of the site as a result.

If it's possible to canonicalise, then that is the best option, as juice and follows are retained. Very often it is the page with the tag lopped off that the tagged page should cite.
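
To make that concrete with the hypothetical URL patterns above, the canonical target is usually just the URL with the tag segment lopped off - a toy sketch, not a drop-in solution:

```python
# Toy example: derive the canonical target by stripping the size/colour tag
# segment from the path. The tag patterns here are made up; a real site
# would need its own rules for which segments are tags.
import re

TAG_SEGMENT = re.compile(r"/(size-\d+|colour-[a-z]+)(?=/|$)")

def canonical_for(path):
    stripped = TAG_SEGMENT.sub("", path)
    return stripped or "/"

for path in ["/size-4", "/womens/size-4", "/womens/boots/ankle/size-4"]:
    print(path, "->", canonical_for(path))
# /size-4 -> /
# /womens/size-4 -> /womens
# /womens/boots/ankle/size-4 -> /womens/boots/ankle
```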

In extreme circumstances I would consider noindexing the pages, as they offer very skinny content and rubbish meta, and it's impossible to handle them individually. I have seen significant improvement in organics as a result.

Personally, I don't think it's enough to simply leave Google to figure it out, although I have seen some sites with very high DA get away with it.

To be honest, I am pretty shocked that Shopify doesn't have a feature to cope with this.

            Regards

            Nigel

            Carousel Projects.

CopyChrisSEO

              Hello Michael Johnson and Mozzers,

              I have seen Shopify do this a few times, though I do not have clients on that particular platform at the moment. It is frustrating. You're right to want to resolve this issue. Between duplicate content, authority conflicts, and an inflated crawl budget, one issue or another is bound to hold back site performance.

              Is this what you would do? Not immediately, no. I want to see those pages canonicalized. That way, your preferred pages get all the juice back from their respective canonical link. Is this an option for you?

Deindex request... and side effects? Canonical tags would make these parts irrelevant (yay, less work!). To be thorough, though: I'd let Google figure it out unless you have strong evidence your crawl budget is maxed. And I don't see any negative side effects from noindexing duplicate content. If worse comes to worst, you have a good plan.

              Shape that content,
              CopyChrisSEO and the Vizergy Team


