There are several ways to manage this problem, and the right one depends on the source of the problem. Let me go over the causes so you can see which solution fits your case.
Causes for duplicate content
1. URL variations
Example:
www.widgets.com/blue-widgets?color=blue is a duplicate of www.widgets.com/blue-widgets
2. HTTP vs. HTTPS or WWW vs. non-WWW pages
If your site has separate versions at "www.site.com" and "site.com" (with and without the "www" prefix), and the same content lives at both versions, you've effectively created duplicates of each of those pages. The same applies to sites that maintain versions at both http:// and https://. If both versions of a page are live and visible to search engines, you may run into a duplicate content issue.
3. Scraped or copied content
Content includes not only blog posts or editorial content, but also product information pages. Scrapers republishing your blog content on their own sites may be a more familiar source of duplicate content, but there's a common problem for e-commerce sites, as well: product information. If many different websites sell the same items, and they all use the manufacturer's descriptions of those items, identical content winds up in multiple locations across the web.
How to fix duplicate content issues
Fixing duplicate content issues all comes down to the same central idea: specifying which of the duplicates is the "correct" one.
Whenever content on a site can be found at multiple URLs, it should be canonicalized for search engines. Let's go over the three main ways to do this: setting up a 301 redirect to the correct URL, using the rel=canonical attribute, or using a meta robots noindex tag.
SOLUTIONS
301 redirect
In many cases, the best way to combat duplicate content is to set up a 301 redirect from the "duplicate" page to the original content page.
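As a rough sketch of how this looks on an Apache server with mod_rewrite enabled (the paths and the widgets.com domain are hypothetical examples, not your actual URLs):

```apacheconf
# .htaccess sketch: 301-redirect duplicates to the canonical page

# Redirect a single duplicate page to the original content page
Redirect 301 /old-blue-widgets /blue-widgets

# Redirect the non-www version of the whole site to the www version
RewriteEngine On
RewriteCond %{HTTP_HOST} ^widgets\.com$ [NC]
RewriteRule ^(.*)$ https://www.widgets.com/$1 [R=301,L]
```

The second rule also helps with the WWW vs. non-WWW duplication described above, since every request to the bare domain gets permanently redirected to one canonical hostname.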
Rel="canonical"
Another option for dealing with duplicate content is to use the rel=canonical attribute. This tells search engines that a given page should be treated as though it were a copy of a specified URL, and all of the links, content metrics, and "ranking power" that search engines apply to this page should actually be credited to the specified URL.
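A minimal example of the tag, placed in the `<head>` of the duplicate page (the URL here is the example domain from earlier; use your own canonical URL):

```html
<!-- On www.widgets.com/blue-widgets?color=blue, point search engines
     at the canonical version of the page -->
<link rel="canonical" href="https://www.widgets.com/blue-widgets" />
```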
Meta Robots Noindex
One meta tag that can be particularly useful in dealing with duplicate content is meta robots, when used with the values "noindex, follow". Commonly called Meta Noindex,Follow and written as content="noindex,follow", this meta robots tag can be added to the HTML head of each individual page that should be excluded from a search engine's index.
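It looks like this in the `<head>` of each page you want to keep out of the index (search engines will still follow the links on the page):

```html
<!-- Exclude this page from the index, but let crawlers follow its links -->
<meta name="robots" content="noindex, follow" />
```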
**IN SUMMARY**
There are many options to deal with this problem. If you are running WordPress, you can use the Yoast SEO plugin to mark the pages you want to exclude as noindex. Alternatively, you can use a redirect plugin to send the wrong pages to the right ones, or edit your .htaccess file to set up the redirects yourself. As you mention, much of the problem might be caused by dummy content pages, so the first step could be to identify those pages and create a list, and then you can redirect or exclude them. Also make sure you have properly configured your site settings for the www and non-www versions of your site.
For more information, you can check these links:
https://www.shoutmeloud.com/wordpress-duplicate-content-problems-fixes.html
https://moz.com/learn/seo/duplicate-content
https://yoast.com/duplicate-content/