What happens to crawled URLs subsequently blocked by robots.txt?
-
We have a very large store with 278,146 individual product pages. Since these are all various sizes and packaging quantities spread across fewer than 200 product categories, my feeling is that Google would be better served by making sure our category pages are indexed.
I would like to block all product pages via robots.txt until we are sure all category pages are indexed, then unblock them. Our product pages rarely change and have no ratings or product reviews, so there is little reason for a search engine to revisit a product page.
The sales team is afraid that blocking a previously indexed product page will result in it being removed from the Google index, and would prefer to submit the categories by hand, 10 per day, via requested crawling.
Which is the better practice?
-
@aspenfasteners To my understanding, disallowing a page or folder in robots.txt does not remove pages from Google's index. It merely gives a directive not to crawl those pages/folders. In fact, when pages are accidentally indexed and one wants to remove them from the index, it is important to actually NOT disallow them in robots.txt, so that Google can crawl those pages and discover the meta NOINDEX tags on the pages. The meta NOINDEX tag is the directive to remove a page from the index, or not to index it in the first place. This is different from a robots.txt directive, which is intended to allow or disallow crawling. Crawling does not equal indexing.
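To make the distinction concrete, here is a minimal sketch of each mechanism; the /product/ folder is a hypothetical path, not your store's actual structure:

# robots.txt - blocks crawling of the folder, but does not by itself remove already-indexed URLs
User-agent: *
Disallow: /product/

<!-- meta tag in the <head> of an individual page - removes it from the index,
     but only works if Google is still allowed to crawl the page and see the tag -->
<meta name="robots" content="noindex">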
So, you could keep the pages indexable and simply block them in your robots.txt file, if you want. If they've already been indexed, they should not disappear quickly (though they might over time). BUT if they haven't been indexed yet, this would prevent them from being discovered.
All of that said, from reading your notes, I don't think any of this is warranted. Google discovers pages on a website very quickly, and existing indexed pages shouldn't really get in the way of new discovery. In fact, they might help the category pages be discovered, if they contain links to the categories.
I would create a category sitemap XML file, reference it in your robots.txt, and let that do the work of prioritizing the categories for crawling, discovery, and indexation.
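A minimal sketch of that setup, with placeholder file names and URLs:

# robots.txt
Sitemap: https://www.example.com/sitemap-categories.xml

<!-- sitemap-categories.xml: list only the ~200 category URLs -->
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/categories/hex-bolts/</loc>
  </url>
  <!-- ...one <url> entry per category page... -->
</urlset>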
-
@aspenfasteners to answer your question: "do we KNOW that Google will immediately de-index URLs blocked by robots.txt?"
Google will not immediately de-index URLs that are blocked by robots.txt, based on my experience. I've dealt with a very similar situation but at a much greater scale - around 8M automatically generated pages that got into Google's index. It may take a year or more to de-index these pages completely. Of course, every case is different, but based on my understanding, if you block these low-quality product pages, Google will slowly start re-evaluating them, and it will start with the ones that get some traffic.
Here is what happens when Google re-evaluates your individual product pages:
When deciding whether to keep a page in its index or not, Google takes into account multiple factors, and one of the most important is how many backlinks (both internal and external) lead to the page. Other factors include content quality, whether the page is similar or a duplicate of another page, Core Web Vitals scores, the size of your crawl budget, and, of course, external backlinks (which are irrelevant in your case).
If you are afraid of losing some traffic that comes to these product pages, or you have other concerns, just run a smaller experiment: take a sample of 1,000-2,000 pages, block them in robots.txt or add a meta robots "noindex, follow" directive, and observe Google's reaction over 1-6 weeks, depending on your crawl budget.
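If you go the meta robots route for the sample, the tag on each test page would look like the snippet below; the equivalent HTTP header is also shown for pages where editing the HTML template is impractical. Both are standard mechanisms, and which fits best depends on your platform:

<!-- in the <head> of each sampled product page -->
<meta name="robots" content="noindex, follow">

# equivalent HTTP response header
X-Robots-Tag: noindex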
Another thing to check:
If you use Screaming Frog, it has a nice feature that shows the internal PageRank and the number of internal incoming links for every page. As a rule of thumb, if an individual product page has at least 10 internal incoming links from canonicalized pages, there is a high probability it will get indexed.
-
@terentyev - sorry, I can't edit my questions once submitted while they wait for approval (why?). The statement should read that my question SHOULD be very specific, whereas my original question was much more general - you answered that question very nicely. Sorry for any misunderstanding.
-
@terentyev thanks for the reply. We have no reason to believe these URLs are backlinked. These aren't consumer products that individuals are interested in; our site is a wholesale B2B store selling very narrow categories in bulk quantities, typically for manufacturing. Therefore, there is almost zero chance of backlinks anywhere for something as specific as a particular size/material/package quantity of a product.
We have already initiated a canonicalization project, but we are stuck between two concerns from sales: 1) we can't wait for canonicalization (which is complex), we need sales now, and 2) don't touch robots.txt because MAYBE the individual products are indexed.
So that is why my question is very specific - do we KNOW that Google will immediately de-index URLs blocked by robots.txt?
-
@aspenfasteners thanks for the interesting question.
To summarize my understanding:
- you have ~300K individual product pages, many of them duplicates; e.g. a single product can have multiple characteristics (size or quantity) but the pages are essentially the same.
- your goal is to index the 200 product categories that contain collections of these products, and to remove the low-quality duplicate individual pages from Google's index in the long run.
- my assumption is that these 300K product pages have been historically accumulating some backlinks, which is one of the reasons why they are indexed.
If I am right about the first two points, then you should not block these individual product pages, but rather add canonical URLs to them, pointing to the respective category page that you want to get indexed.
Once you have these canonicals implemented, you should wait a few months or more for Google to pass the link equity to your 200 product category pages. Once that is done, you are free to block the product pages via robots.txt, plus a meta tag on the page itself, and maybe even an X-Robots-Tag header. How exactly to block them is a different discussion - let me know if you want to learn more about the best approach.
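As a minimal sketch of the canonical step, with placeholder URLs standing in for your real product and category pages:

<!-- on https://www.example.com/product/hex-bolt-m8-100-pack/ -->
<link rel="canonical" href="https://www.example.com/categories/hex-bolts/">

Keep in mind that a canonical is treated as a hint rather than a directive, so consolidation like this can take time and is not guaranteed for pages that are not true duplicates.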
So, here is my checklist for this URL migration:
- Add canonicals pointing from product pages to category pages.
- Make sure that all category pages are well interlinked with each other, and that individual product pages link to several category pages (e.g. product A should link to category A, and also to similar categories B & C). As a rule of thumb, make sure that each category page has at least 10 incoming links from other category pages.
- Make sure that all these category pages are linked from your homepage.
- Make sure that the sitemap contains only self-canonicalized pages.
- Make sure that these category pages have good Core Web Vitals metrics compared to your competitors on the SERP.
- In 2-3 months, when you see that Google is indexing the category pages, crawling of the product pages has been reduced significantly, and the rankings of the category pages have gone up, it is OK to block those 300K pages from crawling (a sketch of the eventual robots.txt is below).
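The eventual robots.txt at that stage could look something like this minimal sketch; the /product/ folder and the sitemap file name are placeholders for whatever your store actually uses:

# robots.txt once the category pages are indexed and ranking
User-agent: *
Disallow: /product/

Sitemap: https://www.example.com/sitemap-categories.xml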
As for submitting the categories by hand, I doubt it will help much, especially if the product pages have a lot of backlinks. I've seen many cases where Google effectively disregards robots.txt and keeps a page indexed if it has good backlinks and traffic.
Related Questions
-
410 or 301 after URL update?
Hi there, A site I'm working on at the moment has a thousand "not found" errors in Google Search Console (of course, I'm sure there are thousands more it's not showing us!). The issue is a lot of them seem to come from a URL change. Damage has been done, the URLs have been changed and I can't stop that... but as you can imagine, I'm keen to fix as many as humanly possible. I don't want to go mad with 301s - but for external links in, this seems like the best solution? On the other hand, Google is reading internal links that simply aren't there anymore. Is it better to hunt down the new page and 301 it anyway? OR should I 410 and grit my teeth while Google crawls and recrawls it, warning me that this page really doesn't exist? Essentially I guess I'm asking, how many 301s are too many and will affect our DA? And what's the best solution for dealing with mass 404 errors - many of which aren't attached or linked to from any other pages anymore? Thanks for any insights 🙂
Intermediate & Advanced SEO | | Fubra0 -
Should I include URLs that are 301'd or only include 200 status URLs in my sitemap.xml?
I'm not sure if I should be including old URLs (content) that are being redirected (301) to new URLs (content) in my sitemap.xml. Does anyone know if it is best to include or leave out 301'd URLs in an XML sitemap?
Intermediate & Advanced SEO | | Jonathan.Smith0 -
Image URLs - best practice
Hi - I'm assuming image URL best practice follows the same principles as non-image URLs (not too many files and so on) - I notice a lot of web devs putting photos on subdomains, so I wonder if I'm missing something (I usually avoid subdomains like the plague)!
Intermediate & Advanced SEO | | McTaggart1 -
Robots.txt - Do I block Bots from crawling the non-www version if I use www.site.com ?
My site is set up at http://www.site.com and I have the site redirected from non-www to www in the .htaccess file. My question is... what should my robots.txt file look like for the non-www site? Do you block robots from crawling the site like this? Or do you leave it blank?
User-agent: *
Disallow: /
Sitemap: http://www.morganlindsayphotography.com/sitemap.xml
Sitemap: http://www.morganlindsayphotography.com/video-sitemap.xml
Intermediate & Advanced SEO | | morg454540 -
Weird 404 URL Problem - domain name being placed at end of urls
Hey there. For some reason when doing crawl tests I'm finding pages with the domain name being tacked on the end and causing 404 errors.
Intermediate & Advanced SEO | | Jay328
For example: http://domainname.com/page-name/http://domainname.com
This is happening to all pages, posts and even category pages. 1. Site is in WordPress 2. Using Yoast SEO plugin. Any suggestions? Thanks!
-
Exact keyword URL or not?
Hi all, I have a quick question about the proper use of permalinks. Let's say that I have a website about sports and I want to create an internal page dedicated to shoes. I know that the keyword "shoe" has 15,000 monthly visits, while the keyword "shoes" has 1,000 monthly visits. How should I name the internal page? http://www.example.com/shoe or http://www.example.com/shoes (with a final 's')? I would think that by naming the URL http://www.example.com/shoes, the search engine would consider that page for the keywords "shoe" and "shoes", but I am not sure about it. Should I create a URL that only focuses on one specific keyword ("shoe", in this example) or a URL that may encompass more than one keyword ("shoe" and "shoes")? I hope this is clear. Thank you for your time and help. All best, Sal
Intermediate & Advanced SEO | | salvyy0 -
Using 2 wildcards in the robots.txt file
I have a URL string which I don't want to be indexed. It includes the characters _Q1 in the middle of the string. So in the robots.txt can I use 2 wildcards in the string to take out all of the URLs with that in it? So something like /*_Q1*. Will that pick up and block every URL with those characters in the string? Also, this is not directly off the root, but in a secondary directory, so .com/.../_Q1. So do I have to format the robots.txt as /*/*_Q1* as it will be in the second folder, or will just using /*_Q1* pick up everything no matter what folder it is in? Thanks.
Intermediate & Advanced SEO | | seo1234560 -
Is it safe to redirect multiple URLs to a single URL?
Hi, I have an old WordPress website with about 300-400 original pages of content on it. All relating to my company's industry: travel in Africa. It's a legitimate site with travel stories, photos, advice etc. Nothing spammy about it. No adverts on it. No affiliates. The site hasn't been updated for a couple of years and we no longer have a need for it. Many of the stories on it are quite out of date. The site has built up a modest MozRank value over the last 5 years, and has a few hundred organically achieved inbound links. Recently I set up a swanky new branded website on ExpressionEngine on a new domain. My intention is to:
- Shut down the old site
- Focus all attention on building up content on the new website
- Ask the people linking to the old site to link to my new site instead (I wonder how many will actually do so...)
- Where possible, set up a 301 redirect from pages on the old site to their closest match on the new site
- Set up a 301 redirect from the old site's home page to the new site's homepage
Sounds good, right? But there is one issue I need some advice on... The old site has about 100 pages that do not have a good match on the new site. These pages are outdated or of inferior quality, so it doesn't really make sense to rewrite them and put them on the new site. I call these my "black sheep pages". So... for these "black sheep pages" should I (A) redirect the URLs to the new site's homepage, (B) redirect the URLs to the old site's home page (which in turn redirects to the new site's homepage), or (C) not redirect the URLs, and let them die a lonely 404 death?
OPTION A:
oldsite.com/page1.php -> newsite.com
oldsite.com/page2.php -> newsite.com
oldsite.com/page3.php -> newsite.com
oldsite.com/page4.php -> newsite.com
oldsite.com/page5.php -> newsite.com
oldsite.com -> newsite.com
OPTION B:
oldsite.com/page1.php -> oldsite.com
oldsite.com/page2.php -> oldsite.com
oldsite.com/page3.php -> oldsite.com
oldsite.com/page4.php -> oldsite.com
oldsite.com/page5.php -> oldsite.com
oldsite.com -> newsite.com
OPTION C:
oldsite.com/page1.php : do not redirect, let page 404 and disappear forever
oldsite.com/page2.php : do not redirect, let page 404 and disappear forever
oldsite.com/page3.php : do not redirect, let page 404 and disappear forever
oldsite.com/page4.php : do not redirect, let page 404 and disappear forever
oldsite.com/page5.php : do not redirect, let page 404 and disappear forever
oldsite.com -> newsite.com
My intuition tells me that Option A would pass the most "link juice" to my new site, but I am concerned that it could also be seen by Google as a spammy redirect technique. What would you do? Help 😐
Intermediate & Advanced SEO | | AndreVanKets