Block all but one URL in a directory using robots.txt?
-
Is it possible to block all but one URL with robots.txt?
For example, take domain.com/subfolder/example.html: if we block the /subfolder/ directory, we want every URL under it to be blocked except the exact-match URL domain.com/subfolder.
-
Robots.txt files are sequential, meaning directives are followed in the order they appear. So if two directives conflict, crawlers follow the later one.
So the simple way to do this is to disallow all files first, then allow the directory you want next. It would look something like this:
User-agent: *
Disallow: /
Allow: /test

Caveat: This is NOT the way robots.txt is supposed to work. Robots.txt was designed for disallowing, and technically you shouldn't ever have to use it for allowing. That said, this should work pretty well.
You can check your work in Google Webmaster Tools, which has a robots.txt checker under Site Configuration > Crawler Access. Just type in your proposed robots.txt, then a test URL, and you should be good to go.
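If you'd also like to sanity-check the rules offline, Python's standard-library urllib.robotparser can simulate a crawler. Here's a minimal sketch applying the approach above to the /subfolder example from the question (domain.com is the asker's placeholder). One caveat: this parser applies rules in first-match order rather than Google's most-specific-rule precedence, so the rules below are ordered to behave the same under both:

from urllib import robotparser

# Rules for the case in the question: block everything under
# /subfolder/ while leaving the exact URL /subfolder crawlable.
rules = """\
User-agent: *
Disallow: /subfolder/
Allow: /subfolder
"""

rp = robotparser.RobotFileParser()
rp.parse(rules.splitlines())  # feed the rules as a list of lines

# Blocked: this URL lives inside /subfolder/
print(rp.can_fetch("*", "http://domain.com/subfolder/example.html"))  # False

# Allowed: "/subfolder" has no trailing slash, so "Disallow: /subfolder/"
# never matches it and the Allow rule applies.
print(rp.can_fetch("*", "http://domain.com/subfolder"))  # True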
Hope this helps!
-
To my knowledge this possibility does not exist. One fast way to get around it is to run a crawler program over your URLs, so you can quickly copy out all the URLs in the folder, paste them into the robots.txt, and leave out the one that you want in the index.
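As a rough sketch of that workaround, assuming you already have the crawled URL list, a few lines of Python could turn it into Disallow rules (the URLs and the kept page below are made up for illustration):

from urllib.parse import urlparse

# Hypothetical output of a crawl of the folder, plus the one page
# that should stay crawlable.
crawled = [
    "http://domain.com/subfolder",
    "http://domain.com/subfolder/example.html",
    "http://domain.com/subfolder/other.html",
]
keep = "http://domain.com/subfolder"

lines = ["User-agent: *"]
for url in crawled:
    if url != keep:
        # robots.txt rules match on the path, not the full URL
        lines.append("Disallow: " + urlparse(url).path)

print("\n".join(lines))
# User-agent: *
# Disallow: /subfolder/example.html
# Disallow: /subfolder/other.html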
Related Questions
-
Can anyone please explain the real difference between backlinks, 301 links, and redirect links? Which one is better to rank a website? I am looking for help for one of my websites, vacuum cleaners.
Intermediate & Advanced SEO | hshajjajsjsj388
-
No content using Fetch
Wooah, this one makes me feel a bit nervous. The cached version of the site homepage shows all the text, but I understand that is the HTML code constructed by the browser, so I get that. If I Google some of the content, it is there in the index, and the cached version is from yesterday. But if I Fetch and Render in GWT, none of the content is available in the preview, in either Googlebot or visitor view. The whole preview is just the menu, a holding image for a video, and a tag line for it. There are no reports of blocked resources apart from a Wistia URL. How can I decipher what is blocking Google if it does not report any problems? The CSS is visible for reference to, for example, <section class="text-within-lines big-text narrow"> class="data"> some content... Ranking is a real issue, in part because of a poorly functioning main menu, but I'm really concerned with what is happening with the render.
Intermediate & Advanced SEO | MickEdwards
-
Best practice for disallowing URLs with robots.txt
Hi everybody, We are currently trying to tidy up the crawl errors which appear when we crawl the site. On first viewing we were very worried, to say the least: 17,000+. But after looking closer at the report, we found the majority of these errors were being caused by bad URLs featuring:
Currency, for example: "directory/currency/switch/currency/GBP/uenc/aHR0cDovL2NlbnR1cnlzYWZldHkuY29tL3dvcmt3ZWFyP3ByaWNlPTUwLSZzdGFuZGFyZHM9NzEx/"
Color, for example: ?color=91
Price, for example: "?price=650-700"
Order, for example: ?dir=desc&order=most_popular
Page, for example: "?p=1&standards=704"
Login, for example: "customer/account/login/referer/aHR0cDovL2NlbnR1cnlzYWZldHkuY29tL2NhdGFsb2cvcHJvZHVjdC92aWV3L2lkLzQ1ODczLyNyZXZpZXctZm9ybQ,,/"
My question now is, as a novice at working with robots.txt, what would be the best practice for disallowing URLs like these from being crawled? Any advice would be appreciated!
Intermediate & Advanced SEO | centurysafety
-
Robots.txt is case sensitive? Pls suggest
Hi, I have seen a few URLs with duplicate titles in the HTML Improvements report. Can I disable one of the below URLs in the robots.txt? /store/Solar-Home-UPS-1KV-System/75652 /store/solar-home-ups-1kv-system/75652 If I disable this, Disallow: /store/Solar-Home-UPS-1KV-System/75652, will the search engines still scan this: /store/solar-home-ups-1kv-system/75652? I'm a little confused with case sensitivity. Pls suggest whether to go ahead or not in the robots.txt.
Intermediate & Advanced SEO | Rahim119
-
What is the benefit of directory pages?
I recently started at a new job running ecommerce websites. We sell yoga equipment, and on two of our sites we built directory pages for yoga studios to list their calendars and whatnot. They are pretty old and out of date, but my question is: is there any benefit to these types of directories? If there is, we need to look at refreshing them; if not, they need to go. One of them is here: http://www.everythingyoga.com/studios.aspx. Like I said, it is out of date.
Intermediate & Advanced SEO | ShockoeCommerce
-
Robots.txt help
Hi Moz Community, Google is indexing some developer pages from a previous website where I currently work: ddcblog.dev.examplewebsite.com/categories/sub-categories. I was wondering how I include these in a robots.txt file so they no longer appear on Google. Can I do it under our homepage GWT account, or do I have to have a separate account set up for these URL types? As always, your expertise is greatly appreciated. -Reed
Intermediate & Advanced SEO | IceIcebaby
-
URLs missing from product_cat sitemap
I'm using the Yoast SEO plugin to generate XML sitemaps on my e-commerce site (WooCommerce). I recently changed the category structure, and now only 25 of about 75 product categories are included. Is there a way to manually include URLs, or what is the best way to have them all indexed in the sitemap?
Intermediate & Advanced SEO | kisen
-
Tracking URLs and Redirects
We have a client with many archived newsletter links that contain tracking code at the end of the URL. These old URLs point to pages that don't exist anymore. Is there a way to set up permanent redirects for these old URLs with tracking code? We have tried, and it doesn't seem to work. Thank you!
Intermediate & Advanced SEO | BopDesign