Moz Q&A is closed.
After more than 13 years and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we're not completely removing the content (many posts will still be viewable), we have locked both new posts and new replies. More details here.
How do I block an entire category/directory with robots.txt?
-
Does anyone have any idea how to block an entire product category, including all the products in that category, using the robots.txt file? I'm using WooCommerce on WordPress and I'd like to prevent bots from crawling every single one of my product URLs for now.
The confusing part right now is that I have several different URL structures linking to every single one of my products, for example www.mystore.com/all-products, www.mystore.com/product-category, etc.
I'm not really sure what I'd type into the robots.txt file, or where to place the file.
Any help would be appreciated, thanks!
-
Thanks for the detailed answer, I will give it a try!
-
Hi
This should do it. Place the robots.txt file in the root directory of your site, so that it's reachable at www.mystore.com/robots.txt:
User-agent: *
Disallow: /product-category/
You can check out some more examples here: http://www.seomoz.org/learn-seo/robotstxt
As for the multiple URLs linking to the same pages, you will just need to check all possible variants and make sure you have them covered in the robots.txt file.
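For example, based on the two URL structures you mentioned, a finished file might look something like this (the exact paths are my assumption from your examples, so check them against your own permalink settings first):

User-agent: *
Disallow: /product-category/
Disallow: /all-products/

Each Disallow rule is a prefix match, so it blocks everything under that path, and any product URL nested below those directories would be covered.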
Google Webmaster Tools has a page you can use to check whether the robots.txt file is doing what you expect it to do (under Health -> Blocked URLs).
It might be easier to block the pages with a meta robots tag, as described in the link above, if you are running a plugin that allows this; that should take care of all the different URL structures as well.
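For reference, the meta tag in question is a single line inside the <head> of each page you want kept out of the index (a plugin would normally add this for you, rather than you editing theme templates by hand):

<meta name="robots" content="noindex">

Since the tag sits on the page itself, it applies no matter how many different URL structures point at your product pages.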
Hope that helps!
Related Questions
-
What is the best meta description for Category Pages, Tag Pages and Main Article?
Hi, I want to index all my categories and tags, but I fear duplicating the meta descriptions. For example: I have a tag named "Learn Stock Market", a category named "Learning", and a main article "What is Stock Market". What is your suggestion for the meta descriptions of these three pages that would look good for SEO on Google?
On-Page Optimization | mbmozmb
-
Best schema option for condos / condominiums?
Hey guys, I'm doing a review of the schema on some of our sites. Most of them are generic, using LocalBusiness. There are a few more specific schemas I could use, but I'm not sure which would be the most relevant. Wondering if any of you have a suggestion or ideas? https://schema.org/Residence https://schema.org/LodgingBusiness https://schema.org/ApartmentComplex Or I could just stick with LocalBusiness. I'm leaning towards LodgingBusiness or ApartmentComplex... but when I think of LodgingBusiness I think of something temporary, a vacation type deal like hotels. Apartments are kind of self-explanatory; a condominium isn't exactly an apartment, but perhaps it is more comparable to an apartment than a hotel, motel or inn. What are your thoughts on this? Also, which "format" is better to use: RDFa, microdata, or JSON-LD? Does it matter?
On-Page Optimization | donnieath
-
Why are http and https pages showing different domain/page authorities?
My website www.aquatell.com was recently moved to the Shopify platform. We chose to use the http domain because we didn't want to change too much, too quickly by moving to https. Only our shopping cart is using the https protocol. We noticed, however, that https versions of our non-cart pages were being indexed, so we created canonical tags to point the https version of a page to the http version. What's got me puzzled, though, is that when I use Open Site Explorer to look at domain/page authority values, I get different scores for the http vs. https version. And the https version is always better. Example: http://www.aquatell.com DA = 21 and https://www.aquatell.com DA = 27. Can somebody please help me make sense of this? Thanks,
On-Page Optimization | Aquatell
-
How to exclude URL filter searches in robots.txt
When I look through my Moz reports I can see they've included 'pages' which they shouldn't have, i.e. URLs with filtering parameters such as this one: http://www.mydomain.com/brands?color=364&manufacturer=505 How can I exclude all of these filters in the robots.txt? I think it'll be: Disallow: /*?color=$ Is that the correct syntax, with the $ sign in it? Thanks!
On-Page Optimization | neenor
-
Solve duplicate content issues by using robots.txt
Hi, I have a primary website, and besides that I also have some secondary websites which have the same content as the primary website. This leads to duplicate content errors. Because there are many duplicate-content URLs, I want to use the robots.txt file to prevent Google from indexing the secondary websites, to fix the duplicate content issue. Is that OK? Thanks for any help!
On-Page Optimization | JohnHuynh
-
How to avoid keyword stuffing on e-Commerce Category pages
Hi, I'm optimizing a large consumer electronics e-commerce superstore. Based on the client's choice of keywords, I'm using product category pages as my target URLs. Because of the proprietary CMS structure, the product names and titles featured on my landing pages (product category pages) create keyword overkill, affecting various ranking factors. For example, one of the target URLs / landing pages, dedicated to a specific product category, mentions the keyword over 190 times because of so many product titles in the "body" section. Would an inline rel="canonical" help? If yes, what part of the website should it "canonize"? If rel="canonical" is not the answer, what strategies would you suggest? Thanks!
On-Page Optimization | dimanyc
-
How do we handle sitemaps in robots.txt when multiple domains point to the same physical location?
We have www.mysite.net, www.mysite.se, www.mysite.fi and so on. All of these domains point to the same physical location on our webserver, and we replace the text returned to the client depending on which domain he/she requested. My problem is this: how do I configure sitemaps in robots.txt when robots.txt is used by multiple domains? If I for instance put the rows Sitemap: http://www.mysite.net/sitemapNet.xml Sitemap: http://www.mysite.net/sitemapSe.xml in robots.txt, would that result in some cross-submission error?
On-Page Optimization | nordicnetproducts
-
Title tag for category page
I'd like to know your views on the best approach to title tags for category pages on ecommerce sites. Three examples: A) Category name | Free delivery on $50 purchase | Brand name B) Discover best "category name" on brand name C) Category Name | 1st Keyword, 2nd keyword | Brand name Thanks!
On-Page Optimization | walidalsaqqaf