Block parent folder in robots.txt, but not children
-
Example:
I want to block this URL (which shows up in Webmaster Tools as an error):
http://www.siteurl.com/news/events-calendar/usa
But not this:
-
The idea from Andrew is nice, but my guess would be that you're targeting multiple events, so that might run into issues. What you could do is add a little more pattern matching and make it like this:
Disallow: /news/events-calendar/usa$
(Note that robots.txt does not support full regular expressions, and a rule can't start with ^: every path must begin with /. Google does honour * as a wildcard and $ as an end-of-URL anchor, which is what makes this rule match only the parent page.)
-
You could use "Allow" in your robots.txt file for just this problem (note that both paths should start with a slash):
Allow: /news/events-calendar/usa/event-name
Disallow: /news/events-calendar/usa
See the Allow directive section of this page: https://en.wikipedia.org/wiki/Robots_exclusion_standard
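If you want to sanity-check an Allow/Disallow combination like this before deploying it, Python's standard-library urllib.robotparser can parse the rules and answer can_fetch queries. A rough sketch, with two caveats: "event-name" is just the placeholder from the answer above, not a real URL, and Python applies rules in file order while Google picks the most specific match, so listing the Allow line first keeps both interpretations in agreement:

```python
from urllib import robotparser

# Placeholder rules: "event-name" stands in for a real child event URL.
rules = """\
User-agent: *
Allow: /news/events-calendar/usa/event-name
Disallow: /news/events-calendar/usa
""".splitlines()

rp = robotparser.RobotFileParser()
rp.parse(rules)

# The parent page is blocked...
print(rp.can_fetch("*", "http://www.siteurl.com/news/events-calendar/usa"))  # False
# ...but the child event page is still crawlable.
print(rp.can_fetch("*", "http://www.siteurl.com/news/events-calendar/usa/event-name"))  # True
```

This is only a local approximation; the robots.txt tester in Search Console is the authoritative check for how Googlebot reads the file.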
Related Questions
-
Best Location for Copy Block
We are having discussions around the appropriate location to place the SEO copy block on an eCommerce category page. Would like to get the community's opinion to share with the creative team.
Web Design | TukTown
Robots.txt being blocked
I think there is an issue with this website I'm working on, here is the URL: http://brownieairservice.com/ In Google Webmaster Tools I am seeing this in the robots.txt tester:
User-agent: *
Crawl-delay: 1
Disallow: /wp-content/plugins/
Disallow: /wp-admin/
Also, when I look at "blocked resources" in Webmaster Tools, this is showing as blocked: http://brownieairservice.com/wp-content/plugins/contact-form-7/includes/js/jquery.form.min.js?ver=3.51.0-2014.06.20 It looks like the form plugin is causing the issue, but I don't understand this. There are no site errors or URL errors, so I don't understand what this crawl delay means or how to fix it. Any input would be greatly appreciated. Thank you
Web Design | SOM24
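For a question like this, one way to confirm that it's the Disallow: /wp-content/plugins/ line doing the blocking, and not the crawl delay, is to replay the rules through Python's urllib.robotparser. A sketch under that assumption; note that Crawl-delay only throttles request frequency and never blocks a URL:

```python
from urllib import robotparser

# The rules reported by the robots.txt tester, verbatim.
rules = """\
User-agent: *
Crawl-delay: 1
Disallow: /wp-content/plugins/
Disallow: /wp-admin/
""".splitlines()

rp = robotparser.RobotFileParser()
rp.parse(rules)

blocked = ("http://brownieairservice.com/wp-content/plugins/contact-form-7/"
           "includes/js/jquery.form.min.js?ver=3.51.0-2014.06.20")
print(rp.can_fetch("*", blocked))                          # False: it sits under /wp-content/plugins/
print(rp.can_fetch("*", "http://brownieairservice.com/"))  # True: the homepage is unaffected
```

Allowing that one plugin path (or the whole plugins folder) in robots.txt would let Google render the contact form's script.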
Fixing Render Blocking Javascript and CSS in the Above-the-fold content
We don't have a responsive design site yet, and our mobile site is built through Dudamobile. I know it's not the best, but I'm trying to do whatever we can until we get around to redesigning it. Is there anything I can do about the following PageSpeed Insights errors, or are they just a function of using Dudamobile?
Eliminate render-blocking JavaScript and CSS in above-the-fold content: Your page has 3 blocking script resources and 5 blocking CSS resources. This causes a delay in rendering your page. None of the above-the-fold content on your page could be rendered without waiting for the following resources to load. Try to defer or asynchronously load blocking resources, or inline the critical portions of those resources directly in the HTML.
Remove render-blocking JavaScript:
http://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js
http://mobile.dudamobile.com/…ckage.min.js?version=2015-04-02T13:36:04
http://mobile.dudamobile.com/…pts/blogs.js?version=2015-04-02T13:36:04
Optimize CSS delivery of the following:
http://fonts.googleapis.com/…:400|Great+Vibes|Signika:400,300,600,700
http://mobile.dudamobile.com/…ont-pack.css?version=2015-04-02T13:36:04
http://mobile.dudamobile.com/…kage.min.css?version=2015-04-02T13:36:04
http://irp-cdn.multiscreensite.com/kempruge/files/kempruge_0.min.css?v=6
http://irp-cdn.multiscreensite.com/…mpruge/files/kempruge_home_0.min.css?v=6
Thanks for any tips, Ruben
Web Design | KempRugeLawGroup
Robots.txt - Allow and Disallow. Can they be the same?
Hi All, I need some help on the following: are the following commands the same?
User-agent: *
Disallow:
or
User-agent: *
Allow: /
I'm a bit confused. I take it that the first one allows all the bots but the second one blocks all the bots. Is that correct? Many thanks, Aidan
Web Design | Presenter
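For what it's worth, the two blocks in that question can be compared empirically: Python's standard-library urllib.robotparser treats an empty Disallow: and Allow: / identically, as "allow everything". A quick sketch, not a substitute for Google's own robots.txt tester:

```python
from urllib import robotparser

def allows_everything(rules_text):
    """Parse a robots.txt snippet and test an arbitrary URL against it."""
    rp = robotparser.RobotFileParser()
    rp.parse(rules_text.splitlines())
    return rp.can_fetch("*", "http://example.com/any/page.html")

# An empty Disallow means "nothing is disallowed"...
print(allows_everything("User-agent: *\nDisallow:"))  # True
# ...and Allow: / explicitly permits every path.
print(allows_everything("User-agent: *\nAllow: /"))   # True
```

To actually block all bots, the rule would be Disallow: / instead.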
Is it too late to change an IP from the linking c-block?
My main web development company links to many of our clients, and our clients link back to us with footer links. We obviously have a high volume of c-block relations. If I change my main site's location to a different server, will it make any difference, or is it too late?
Web Design | sanchez1960
Making a third-party hosted blog look like a folder on the main domain
I have a client that has a "completely pristine" Microsoft .NET web environment and is unwilling to put a WordPress installation on their server. Their management team wants a WordPress blog for the marketing department. Is there a way we can host the WordPress blog with a regular hosting company yet have it appear as part of the main site, e.g., mainsite.com/blog, rather than putting it in a subdomain (blog.mainsite.com) and losing all the SEO benefits of the blog content?
Web Design | jtroia
Should /dev folder be blocked?
I have been experiencing a ranking drop every two months, so I came upon a new theory this morning... Does Google do a deep crawl of your site, say, every 60-90 days, and would they penalize a site for duplicate content if they crawled into your /dev area, which would contain pretty much the exact same URLs and content as your production environment? The only issue I see with this theory is that I have been penalized only for specific keywords on specific pages, not necessarily across the board. Thoughts? What would be the best way to block out your /dev area?
Web Design | BoulderJoe
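For blocking a /dev area like the one in that question, a single prefix rule is the usual approach, though robots.txt only discourages crawling; HTTP authentication or an IP allowlist on the dev environment is the safer fix for duplicate-content worries. A sketch of the rule, checked with Python's urllib.robotparser and assuming the dev copy lives under /dev/:

```python
from urllib import robotparser

# Hypothetical rule for a staging copy mounted at /dev/.
rp = robotparser.RobotFileParser()
rp.parse("""\
User-agent: *
Disallow: /dev/
""".splitlines())

print(rp.can_fetch("*", "http://example.com/dev/products/widget"))  # False: dev copy is blocked
print(rp.can_fetch("*", "http://example.com/products/widget"))      # True: production stays crawlable
```

Because the dev URLs mirror production, the prefix rule cleanly separates the two trees without touching any production path.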
Correct use for Robots.txt
I'm in the process of building a website and am experimenting with some new pages. I don't want search engines to begin crawling the site yet, so I would like to add robots.txt rules for the pages I don't want them to crawl. If I do this, can I remove the rules later and get them to crawl those pages?
Web Design | EricVallee34