How do I block an entire category/directory with robots.txt?
-
Does anyone have any idea how to block an entire product category, including all the products in that category, using the robots.txt file? I'm using WooCommerce in WordPress, and I'd like to prevent bots from crawling every single one of my product URLs for now.
The confusing part is that I have several different URL structures linking to each of my products, for example www.mystore.com/all-products, www.mystore.com/product-category, etc.
I'm not really sure what to put in the robots.txt file, or where to place the file.
Any help would be appreciated. Thanks!
-
Thanks for the detailed answer, I will give it a try!
-
Hi
This should do it. Place the robots.txt file in the root directory of your site:
User-agent: *
Disallow: /product-category/
You can check out some more examples here: http://www.seomoz.org/learn-seo/robotstxt
As for the multiple URLs linking to the same pages, you will just need to check all the possible variants and make sure they are covered in the robots.txt file.
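For example, a minimal robots.txt covering both of the paths you mentioned might look like the following; the paths are just illustrative, so adjust them to match your site's actual permalink structure. Each Disallow rule blocks every URL whose path starts with that prefix:
User-agent: *
Disallow: /product-category/
Disallow: /all-products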
Google Webmaster Tools has a page you can use to check whether the robots.txt file is doing what you expect (under Health -> Blocked URLs).
It might be easier to block the pages with a meta robots tag, as described in the link above, if you are running a plugin that allows this; that should also take care of all the different URL structures.
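For reference, a typical meta robots tag looks like this in a page's <head> (an SEO plugin would normally generate it for you, so the exact markup may vary):
<meta name="robots" content="noindex, follow" />
One caveat: crawlers can only see this tag if they are allowed to fetch the page, so if you go the meta tag route, don't also block the same URLs in robots.txt.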
Hope that helps!