How to allow one directory in robots.txt
-
Hello, is there a way to allow a certain child directory in robots.txt but keep all others blocked?
For instance, we've got external links pointing to /user/password/, but we're blocking everything under /user/, and there are too many /user/something/ subdirectories to just disallow every one of them individually except /user/password/.
I hope that makes sense...
Thanks!
-
Yes, you can set it up like this:
User-agent: *
Disallow: /user/
Allow: /user/password/
Google applies the most specific (longest) matching rule, so the Allow for /user/password/ overrides the broader Disallow for /user/, and that should do it!
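If you want to sanity-check the rules before deploying them, here's a minimal sketch using Python's standard-library robots.txt parser; the example.com URLs are just placeholders. Note that this parser applies rules in file order rather than picking the longest match the way Google does, so the Allow line is listed first to keep both interpretations in agreement:

from urllib.robotparser import RobotFileParser

rules = [
    "User-agent: *",
    "Allow: /user/password/",   # listed first so order-based parsers agree with Google
    "Disallow: /user/",
]
parser = RobotFileParser()
parser.parse(rules)  # load the rules directly instead of fetching a live robots.txt

print(parser.can_fetch("*", "https://example.com/user/password/"))  # True - still crawlable
print(parser.can_fetch("*", "https://example.com/user/profile/"))   # False - blocked by /user/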
Related Questions
-
Robots.txt allows wp-admin/admin-ajax.php
Hello, Mozzers!
I noticed something peculiar in the robots.txt used by one of my clients: Allow: /wp-admin/admin-ajax.php
What would be the purpose of allowing a search engine to crawl this file? Is it OK? Should I do something about it? Everything else on /wp-admin/ is disallowed.
Thanks in advance for your help.
-AK
Technical SEO | AndyKubrin
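For context, and hedging a little since every install is configured differently, that Allow line matches the pattern WordPress itself emits in its default (virtual) robots.txt, roughly:

User-agent: *
Disallow: /wp-admin/
Allow: /wp-admin/admin-ajax.php

The idea is to keep the admin area blocked while leaving admin-ajax.php reachable, because front-end features in many themes and plugins call that file and Google needs it to render those pages properly.
-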
Crawl solutions for landing pages that don't contain a robots.txt file?
My site (www.nomader.com) is currently built on Instapage, which does not offer the ability to add a robots.txt file. I plan to migrate to a Shopify site in the coming months, but for now the Instapage site is my primary website. In the interim, would you suggest that I manually request a Google crawl through the search console tool? If so, how often? Any other suggestions for countering this Meta Noindex issue?
Technical SEO | Nomader
-
Google Search Console says 'sitemap is blocked by robots'?
Google Search Console is telling me "Sitemap contains URLs which are blocked by robots.txt." I don't understand why my sitemap is being blocked. My robots.txt looks like this:
User-Agent: *
Disallow:
Sitemap: http://www.website.com/sitemap_index.xml
It's a WordPress site, with Yoast SEO installed. Is anyone else having this issue with Google Search Console? Does anyone know how I can fix this issue?
Technical SEO | Extima-Christian
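For what it's worth, a quick way to confirm that a robots.txt like the one above blocks nothing is to run it through Python's standard-library parser (a rough sketch; the URL is the one quoted in the question):

from urllib.robotparser import RobotFileParser

parser = RobotFileParser()
parser.parse([
    "User-Agent: *",
    "Disallow:",   # an empty Disallow value blocks nothing
    "Sitemap: http://www.website.com/sitemap_index.xml",
])
print(parser.can_fetch("*", "http://www.website.com/sitemap_index.xml"))  # True - nothing is disallowed
-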
No description on Google/Yahoo/Bing, updated robots.txt - what is the turnaround time or next step for visible results?
Hello! New to the Moz community and thrilled to be learning alongside all of you. One of our clients' sites is currently showing a 'blocked' meta description due to an old robots.txt file (e.g. "A description for this result is not available because of this site's robots.txt"). We have updated the site's robots.txt to allow all bots. The meta tag has also been updated in WordPress (via the Yoast SEO plugin). See image here of the Google listing and site URL: http://imgur.com/46wajJw I have also ensured that the most recent robots.txt has been submitted via Google Webmaster Tools. When can we expect these results to update? Is there a step I may have overlooked?
Thank you,
Adam
Technical SEO | adamhdrb
-
SEO problems from moving from several pages to one accordion
I've read other posts that say using an accordion is not detrimental to SEO, and for conversion optimization we want to take several of our existing pages and combine them into one accordion. But what will this do to SEO and duplicate content as I redirect the old pages to anchors in the accordion? I would think this would be a duplicate content problem, as www.oldinfo1 and www.oldinfo2 will now have their content on the same page, but I will be redirecting them to www.newpage#oldinfo1 and www.newpage#oldinfo2. Is there a way around duplicate content problems?
Technical SEO | JohnBerger
-
Remove more than 1000 crawl errors from GWT in one day?
Google Webmaster Tools has a "Crawl Errors" feature that displays the top 1000 crawl errors Google has found on your site. I have around 16k crawl errors at the moment, all of which are fixed, but I can only mark 1000 of them as fixed each day/each time Google crawls the site, since it only displays the top 1000 errors and, once those are marked as fixed, it won't show other errors for a while. Does anyone know if it's possible to mark ALL errors as fixed in one operation?
Technical SEO | Host1
-
One page with multiple syndicated feeds
Hi, just a quick note. I hope someone is able to advise. To cut a long story short, I have a page (or pages) that receives multiple syndicated feeds. We are using the content for its value to visitors. I'm very happy to cross-domain rel=canonical the source and not incur any of Panda's wrath, but would like to know if one can add multiple rel=canonicals to one page to reference multiple sources. Appreciate your help.
Many thanks,
Dave Upton
Technical SEO | daveupton
-
Robots.txt
Should I add anything else besides User-Agent: * to my robots.txt file? http://melo4.melotec.com:4010/
Technical SEO | Romancing
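For reference, a minimal sketch of a robots.txt that blocks nothing looks like the following; the Sitemap line is optional, and the sitemap path shown here is only a hypothetical example, not something confirmed to exist on that site:

User-agent: *
Disallow:
Sitemap: http://melo4.melotec.com:4010/sitemap.xml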