Two good suggestions so far, and I had already checked both. Thanks, KJ Rogers and Ryan Kent.
This is starting to look like it boils down to how closely the new SEOmoz crawler sees content the same way Google does.
We did not make any site-wide changes, and the URLs identified as duplicate in the report are valid URLs that genuinely hold similar content: the keywords and so forth were varied for each slightly different product version, with the page copy built through an Excel CONCATENATE construct (roughly like the sketch below). These pages have actually climbed in rank over the months since the content was added.
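For context, the build looks roughly like this; the column references and wording here are just placeholders, not our actual sheet. Each row holds a couple of product attributes, and a formula along these lines stamps out the page copy:

    =CONCATENATE("The ", A2, " is a ", B2, " designed for ", C2, ". Order the ", A2, " today for fast shipping.")

With only the A2/B2/C2 values changing per row, the surrounding text is identical from page to page, which is presumably what trips the crawler's similarity threshold even though Google has treated the pages as distinct enough to rank.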
So, like I said, the sudden identification of these pages as duplicates by the Moz crawler is suspicious to me. I'm not sure it sees things the way Google does.