Thanks for providing some more detail, Holly. I definitely think this is worth asking here, and I'm happy to help.
Some people like to prevent search engines from crawling category pages out of a fear of duplicate content. For example, say you have a post that's at this URL:
site.com/blog/chocolate-milk-is-great.html
and it's also the only post in the category "milk", which lives at this URL:
site.com/blog/category/milk
then search engines see the exact same content (your blog post) on two different URLs. Since duplicate content is a big no-no, many people choose to prevent the engines from crawling category pages. That said, in my experience, it's really up to you. Do you feel like your category pages provide value to users? Would you like them to show up in search results? If so, then make sure you let Google crawl them.
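For reference, if you did want to go the robots.txt route, the rule usually looks something like this (I'm using the example /blog/category/ path from above; your actual category path may differ):

User-agent: *
Disallow: /blog/category/

The first line applies the rule to all crawlers, and the second blocks everything under that path.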
If you DON'T want category pages to be indexed by Google, then I think there's a better choice than using robots.txt. Your best bet is applying a "noindex, follow" meta robots tag to these pages. This tag tells the engines NOT to index the page, but to still follow all of the links on it. It's better than robots.txt because robots.txt won't always keep a page out of search results (that's another long story, but the short version is that Google can still list a blocked URL if other sites link to it), while the noindex tag will.
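The tag itself goes in the <head> section of each category page and looks like this:

<meta name="robots" content="noindex, follow">

One caveat: for Google to actually see that tag, the page can't also be blocked in robots.txt, since a crawler has to fetch the page before it can read the tag.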
If I'm not making sense at all then please just let me know :).
Lastly, from what I can see on your site and blog, it doesn't look like your blog's category pages are actually blocked in your robots.txt file. It's worth having someone double-check that.
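If you want to look for yourself, the robots.txt file always sits at the root of the domain, so for your blog it should be at:

http://blog.squarespace.com/robots.txt

Any Disallow lines in there are the ones keeping crawlers out.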
To check this myself, I just did a Google search for this URL:
http://blog.squarespace.com/blog/?category=Roadmap
And it showed up in Google right away, so it looks like something isn't going according to plan. Don't worry though, that happens all the time and it should be an easy fix.
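One last tip: a site: search combined with the inurl: operator will show you every page on your blog that Google has indexed with "category" in the URL, which is a quick way to see the full picture:

site:blog.squarespace.com inurl:category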