Robots.txt: how to exclude sub-directories correctly?
-
Hello here,
I am trying to figure out the correct way to tell search engines to crawl this:
http://www.mysite.com/directory/
But not this:
http://www.mysite.com/directory/sub-directory/
or this:
http://www.mysite.com/directory/sub-directory2/sub-directory/...
But since I have thousands of sub-directories with almost infinite combinations, I can't list definitions like the following in any manageable way:
Disallow: /directory/sub-directory/
Disallow: /directory/sub-directory2/
Disallow: /directory/sub-directory/sub-directory/
Disallow: /directory/sub-directory2/subdirectory/
etc...
I would end up having thousands of definitions to disallow all the possible sub-directory combinations.
So, is the following a correct, better, and shorter way to define what I want above:
Allow: /directory/$
Disallow: /directory/*
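For completeness, here is a sketch of how these rules would sit in a full robots.txt file, assuming they should apply to all crawlers (adjust the User-agent line if you only mean Googlebot):

```
User-agent: *
Allow: /directory/$
Disallow: /directory/*
```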
Would the above work?
Any thoughts are very welcome! Thank you in advance.
Best,
Fab.
-
I mentioned both. You add a meta robots noindex tag and remove the page from the sitemap.
-
But Google is still free to index a link/page even if it is not included in the XML sitemap.
-
Install the Yoast WordPress SEO plugin and use it to restrict what is indexed and what is included in the sitemap.
-
I am using WordPress with the Enfold theme (ThemeForest).
I want some files to be accessible to Google, but they should not be indexed.
Here is an example: http://prntscr.com/h8918o
I have currently blocked some JS directories/files using robots.txt (see screenshot).
But because of this, I am not able to pass Google's Mobile-Friendly Test: http://prntscr.com/h8925z (see screenshot)
Is it possible to allow access but apply something like noindex via robots.txt? Or is there another way out?
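One common approach (a sketch assuming an Apache server with mod_headers enabled; adjust for your stack) is to stop blocking the JS files in robots.txt and instead send an X-Robots-Tag response header. That lets Googlebot fetch the files for rendering, which should fix the Mobile-Friendly Test, while still telling it not to index them:

```apache
# .htaccess: allow crawling but prevent indexing of JS files
<FilesMatch "\.js$">
  Header set X-Robots-Tag "noindex"
</FilesMatch>
```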
-
Yes, everything looks good: Webmaster Tools gave me the expected results with the following directives:
Allow: /directory/$
Disallow: /directory/*
Which allows this URL:
http://www.mysite.com/directory/
But doesn't allow the following one:
http://www.mysite.com/directory/sub-directory2/...
This page also gives an example similar to mine:
https://support.google.com/webmasters/answer/156449?hl=en
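As a sanity check outside Webmaster Tools, here is a rough Python sketch of the longest-match evaluation that Google's documentation describes for robots.txt (an illustrative approximation, not Google's actual parser; the rules and paths mirror the ones above):

```python
import re

def rule_matches(pattern: str, path: str) -> bool:
    # Translate a robots.txt pattern: '*' matches any run of characters,
    # a trailing '$' anchors the pattern to the end of the URL path.
    anchored = pattern.endswith("$")
    core = pattern[:-1] if anchored else pattern
    regex = "".join(".*" if ch == "*" else re.escape(ch) for ch in core)
    return re.match("^" + regex + ("$" if anchored else ""), path) is not None

def is_allowed(rules, path):
    # The longest matching pattern wins; on a tie, Allow beats Disallow
    # (the least restrictive rule, per Google's documented behavior).
    best_directive, best_pattern = "allow", ""  # no match -> allowed by default
    for directive, pattern in rules:
        if rule_matches(pattern, path):
            if len(pattern) > len(best_pattern) or (
                len(pattern) == len(best_pattern) and directive == "allow"
            ):
                best_directive, best_pattern = directive, pattern
    return best_directive == "allow"

rules = [("allow", "/directory/$"), ("disallow", "/directory/*")]
print(is_allowed(rules, "/directory/"))                # True
print(is_allowed(rules, "/directory/sub-directory/"))  # False
```

Note the tie-break matters here: both patterns are the same length for /directory/ itself, so the Allow rule wins and the index page stays crawlable.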
I think I am good! Thanks
-
Thank you Michael, it is my understanding then that my idea of doing this:
Allow: /directory/$
Disallow: /directory/*
Should work just fine. I will test it within Google Webmaster Tools, and let you know if any problems arise.
In the meantime, if anyone else has more ideas about all this and can confirm, that would be great!
Thank you again.
-
I've always stuck to Disallow and followed this advice:
"This is currently a bit awkward, as there is no "Allow" field. The easy way is to put all files to be disallowed into a separate directory, say "stuff", and leave the one file in the level above this directory:"
http://www.robotstxt.org/robotstxt.html
From https://developers.google.com/webmasters/control-crawl-index/docs/robots_txt this seems contradictory; per that page's examples, /* is equivalent to / (the trailing wildcard is ignored).
I think this post will be very useful for you - http://moz.com/community/q/allow-or-disallow-first-in-robots-txt
-
Thank you Michael,
Google and other search engines actually recognize the "Allow:" directive:
https://developers.google.com/webmasters/control-crawl-index/docs/robots_txt
The fact is: if I don't specify that, how can I be sure that the following single directive:
Disallow: /directory/*
doesn't prevent search engines from crawling the /directory/ index page, as I'd like?
-
As long as you don't have directories somewhere in /* that you want indexed, then I think that will work. There is no "allow", so you don't need the first line; just:
Disallow: /directory/*
You can test it out here: https://support.google.com/webmasters/answer/156449?rd=1