How to allow one directory in robots.txt
-
Hello, is there a way to allow a certain child directory in robots.txt but keep all others blocked?
For instance, we've got external links pointing to /user/password/, but we're blocking everything under /user/. And there are too many /user/somethings/ to just block every one BUT /user/password/.
I hope that makes sense...
Thanks!
-
Yes, you can set it up like this:

User-agent: *
Disallow: /user/
Allow: /user/password/

Google and Bing give precedence to the most specific (longest) matching rule, so /user/password/ stays crawlable while everything else under /user/ remains blocked. Just note that Allow is an extension to the original robots.txt standard: the major search engines support it, but some older crawlers may not.
And that should do it!
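If you want to sanity-check the rules before deploying them, Python's standard-library robots.txt parser can simulate a crawl. One caveat: this parser applies rules in file order (first match wins) rather than Google's longest-match rule, so listing the Allow line first makes both interpretations agree. The example.com URLs are placeholders:

```python
from urllib.robotparser import RobotFileParser

# Allow is listed first so that first-match parsers (like this one) and
# longest-match parsers (like Googlebot) reach the same conclusion.
rules = """\
User-agent: *
Allow: /user/password/
Disallow: /user/
"""

rp = RobotFileParser()
rp.parse(rules.splitlines())

print(rp.can_fetch("*", "https://example.com/user/password/"))  # True
print(rp.can_fetch("*", "https://example.com/user/profile/"))   # False
```

Running this confirms that /user/password/ is fetchable while a sibling path under /user/ is not.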
Related Questions
-
Blocking pages from Moz and Alexa robots
Hello, we want to block all pages in this directory from the Moz and Alexa robots - /slabinventory/search/ Here is an example page - https://www.msisurfaces.com/slabinventory/search/granite/giallo-fiesta/los-angeles-slabs/msi/ Let me know if this is a valid disallow for what I'm trying to do.

User-agent: ia_archiver
Disallow: /slabinventory/search/*

User-agent: rogerbot
Disallow: /slabinventory/search/*

Thanks.
Technical SEO | Pushm
-
Best way to handle 301 redirects on a business directory
We work with quite a few sites that promote retail traders and feature a traders' directory with pages for each of the shops (around 500 listings in most cases). As on any retail strip, shops come and go all the time, so a lot of pages get removed because the business is no longer present. Currently I've been doing 301 redirects to the home page of the directory if you try to access a deleted trader page, but this means an ever-growing htaccess file with thousands of 301 redirects. Are we handling this the best way, or is there a better way to tackle this situation?
Technical SEO | Assemblo
-
Merging two sites into a new one: best way?
Hi, I have one small blog on a specific niche; let's call it firstsite.com (.com extension), and it's hosted on my server. I am going to take over a second blog on the same niche, but with lots more links, posts, authority and traffic. However, it is on a .info domain; let's call it secondsite.info, and for now it's on a different server. I have a third domain, a .com, where I would like to join both blogs. The domain is better and reflects the niche better; let's call it thirdsite.com. How should I proceed to have the best result? I was thinking of creating a new account on my server with the domain thirdsite.com. After that, upload all content from secondsite.info, go to Google Webmaster Tools to let them know the site now sits on a new domain, and do a full 301 redirect. Should it be page by page, or just one 301 redirect? And finally, insert posts (they are not many) from firstsite.com on thirdsite.com and do specific redirects. Is this a good option? Or should I first move secondsite.info to my server, keep updating it, and only a few weeks later make the transition to thirdsite.com? I am worried that it could be too many changes at once.
Technical SEO | delta44
-
Black listed or not, struggling on this one.
I have a client who says they are blacklisted: they do not come up for any search query other than their name. I have checked the things I would expect to cause the issue, like harmful backlinks and poor coding, but the code is fine, though the backlinks are a little slim. They have also said Penguin hit them hard last year. I am confused by this one, as I have worked with clients who got hit by Penguin and then improved, but this particular client has not. http://www.specialistpaintsonline.co.uk is the website; if anyone can shed some light, please do, as I may be missing something obvious. Regards
Technical SEO | Shuffled
-
I think google thinks i have two sites when i only have one
Hi, I am a bit puzzled. I have just used http://www.opensiteexplorer.org/anchors?site=in2town.co.uk to check my anchor text and forgot to put in the www., and the information came up totally different from when I put the www. in. It shows a few links for in2town.co.uk, but when I put in www.in2town.co.uk it gives me different information. Is this a problem, and if so, how do I solve it?
Technical SEO | ClaireH-184886
-
Removing robots.txt on WordPress site problem
Hi, I am a little confused: I ticked the box in WordPress to allow search engines to crawl my site (previously I had asked for them not to), but Google Webmaster Tools is telling me I still have robots.txt blocking them, so I am unable to submit the sitemap. I checked the source code and the robots instruction has gone, so I'm a little lost. Any ideas, please?
Technical SEO | Wallander
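As a quick way to double-check what a given robots.txt actually says, independent of what Webmaster Tools has cached, you can paste its contents into Python's standard-library parser. This is a sketch; the blocking file below is the typical output of WordPress's "discourage search engines" setting, and example.com is a placeholder:

```python
from urllib.robotparser import RobotFileParser

# Typical virtual robots.txt WordPress serves while "discourage search
# engines" is ticked; replace this with the contents of your own file.
blocking = """\
User-agent: *
Disallow: /
"""

rp = RobotFileParser()
rp.parse(blocking.splitlines())
print(rp.can_fetch("Googlebot", "https://example.com/"))  # False

# An empty Disallow, as served after unticking the box, blocks nothing.
rp2 = RobotFileParser()
rp2.parse(["User-agent: *", "Disallow:"])
print(rp2.can_fetch("Googlebot", "https://example.com/"))  # True
```

If the live file parses as allowing Googlebot but Webmaster Tools still reports a block, the tool is likely working from a cached copy and should catch up after a re-fetch.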
Is it possible to be penalized as duplicate content for one keyword but not another?
I help develop an online shopping cart, and after a request from management about some products not showing up in the SERPs, I was able to pin it down to mostly a duplicate content issue. It's a no-brainer, as sometimes new products are inserted with text copied from the manufacturer's website. Recently, though, I stumbled across an odd problem. When we partially rewrote the content to seem unique enough, it remedied the issue for some keywords and not others. A) If you search the company name, our category listing shows as #1, ahead of the manufacturer's website. We always did rank for this term. B) If you search the product name, our product page is listed #3, behind two other listings which belong to the manufacturer. C) If you search the keywords together as "company product", we are still being filtered out as duplicate content. When I allow the filtered results to show, we are ranking #4. It's been a full month since the changes were indexed. Before I rewrite the content even further, I thought I would ask to see if anyone has any insight as to what could be happening.
Technical SEO | moondog604
-
Robots exclusion
Hi all, I have an issue whereby print versions of my articles are being flagged as "duplicate" content / page titles. To get around this, I feel the easiest way is to just add them to my robots.txt document with a disallow. Here is my URL makeup: Normal article: www.mysite.com/displayarticle=12345 Print version of my article: www.mysite.com/displayarticle=12345&printversion=yes I know that having dynamic parameters in my URL is not best practice, to say the least, but I'm stuck with this for the time being... My question is, how do I add just the print versions of articles to my robots file without disallowing the articles too? Can I just add the parameter to the document like so? Disallow: &printversion=yes I also know that I can add a meta noindex, nofollow tag into the head of my print versions, but I feel a robots.txt disallow will be somewhat easier... Many thanks in advance. Matt
Technical SEO | Horizon
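A note on the proposed rule: robots.txt patterns are matched from the start of the URL path, so Disallow: &printversion=yes on its own would match nothing. Google and Bing support a * wildcard extension, so Disallow: /*printversion=yes should catch the print URLs (the original robots.txt standard has no wildcards, so other crawlers may ignore the rule). A rough Python sketch of how such wildcard matching behaves, assuming Google-style semantics:

```python
import re

def robots_match(pattern: str, path: str) -> bool:
    """Rough approximation of Google-style robots.txt path matching:
    '*' matches any run of characters, '$' anchors the end of the URL."""
    regex = re.escape(pattern).replace(r"\*", ".*").replace(r"\$", "$")
    return re.match(regex, path) is not None

# The wildcard rule catches print versions but leaves normal articles alone.
print(robots_match("/*printversion=yes", "/displayarticle=12345&printversion=yes"))  # True
print(robots_match("/*printversion=yes", "/displayarticle=12345"))                   # False

# The bare parameter never matches, since matching starts at the path's first character.
print(robots_match("&printversion=yes", "/displayarticle=12345&printversion=yes"))   # False
```

This is only an illustration of the matching semantics, not a production parser; for crawlers that ignore wildcards, the meta noindex tag mentioned above remains the more portable option.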