Issues like these are fairly easy to detect and fix by checking your meta tags and robots.txt file, which is why you should start there. An entire website, or certain pages on it, can remain unseen by Google for a simple reason: its crawlers are not allowed to access them.
There are several bot directives that will prevent page crawling. Note that it's not a mistake to have these parameters in robots.txt; used properly and accurately, they help save your crawl budget and give bots the exact directions they need to crawl the pages you want crawled.
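For reference, a typical crawl-blocking rule in robots.txt looks like this (the paths below are hypothetical; your own file will list different ones):

User-agent: *
Disallow: /admin/
Disallow: /cart/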
You can detect this issue by checking whether your page's code contains these directives:
<meta name="robots" content="noindex" />
<meta name="robots" content="nofollow" />
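If you have many pages to audit, you can automate this check. Below is a minimal sketch in Python, assuming the requests and beautifulsoup4 packages are installed; the URL is a placeholder you would swap for your own pages:

import requests
from bs4 import BeautifulSoup

def robots_directives(url):
    # Fetch the page and collect the content of every <meta name="robots"> tag
    response = requests.get(url, timeout=10)
    soup = BeautifulSoup(response.text, "html.parser")
    tags = soup.find_all("meta", attrs={"name": "robots"})
    return [tag.get("content", "") for tag in tags]

# Flag pages that block indexing or link following
for directive in robots_directives("https://example.com/some-page"):  # placeholder URL
    if "noindex" in directive or "nofollow" in directive:
        print("Blocking directive found:", directive)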