Hi Adam!
Thanks for writing in to Q&A! Let's dive right into the nitty gritty!
There are a lot of reasons you could be seeing these data discrepancies. If you're seeing more GWT errors, it could be because Google has discovered links through different means. Moz starts with your home page and uses recursive crawling to find pages on your site. We keep crawling until we stop finding unique links, or until we hit your page crawl limit. Bottom line: when we crawl your site, your homepage is the only seed. We know that Google uses multiple seeds, so it's possible Google is discovering and indexing more pages that way.
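To make the single-seed point concrete, here's a rough sketch of that kind of crawl. The site data and names here are hypothetical (a real crawler fetches and parses each URL), but it shows why a page that isn't reachable by links from the homepage would never show up in a single-seed crawl:

```python
from collections import deque

# Hypothetical in-memory site: page -> links found on that page.
# A real crawler would fetch each URL over HTTP and parse its HTML instead.
SITE = {
    "/": ["/about", "/blog"],
    "/about": ["/"],
    "/blog": ["/blog/post-1", "/blog/post-2"],
    "/blog/post-1": ["/blog"],
    "/blog/post-2": [],
}

def crawl(seed, page_limit=10000):
    """Breadth-first crawl from a single seed page, stopping when
    no unique links remain or the page crawl limit is reached."""
    seen = {seed}
    queue = deque([seed])
    crawled = []
    while queue and len(crawled) < page_limit:
        page = queue.popleft()
        crawled.append(page)
        for link in SITE.get(page, []):
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return crawled

print(crawl("/"))
```

A page nobody links to (say, one Google found via a sitemap or an external link) simply never enters the queue here, which is one way the two tools can report different page counts.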
403 errors can be user-agent specific. There's also a possibility that at the server level you're blocking Googlebot (the crawler behind GWT) from crawling, but not rogerbot (Moz's crawler).
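As a toy illustration (not your actual server config), agent-specific blocking boils down to logic like this, where one crawler's user-agent string matches a deny rule and another's doesn't:

```python
# Hypothetical deny rule targeting Google's crawler but not Moz's.
BLOCKED_AGENTS = ("Googlebot",)

def status_for(user_agent):
    """Return the HTTP status this hypothetical server would send."""
    if any(blocked in user_agent for blocked in BLOCKED_AGENTS):
        return 403  # Forbidden for this agent only
    return 200

print(status_for("Mozilla/5.0 (compatible; Googlebot/2.1)"))
print(status_for("rogerbot/1.2"))
```

If something like this is in place (in a firewall, .htaccess rule, or security plugin), GWT would report 403s on pages that look perfectly fine to rogerbot.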
I looked into your account, and it looks like we're only crawling about 3,500 pages. We also generally limit our crawls to about 200 links per page, and in most cases your pages had well beyond 200 internal links, so some of those links won't be followed.
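If you want to check which pages exceed that ~200-link limit yourself, a quick count of anchor tags is enough. This sketch uses Python's built-in HTML parser on an inline sample (fetching the live page is left out to keep it self-contained):

```python
from html.parser import HTMLParser

class LinkCounter(HTMLParser):
    """Count <a> tags that carry an href attribute."""
    def __init__(self):
        super().__init__()
        self.count = 0

    def handle_starttag(self, tag, attrs):
        if tag == "a" and any(name == "href" for name, _ in attrs):
            self.count += 1

def count_links(html):
    parser = LinkCounter()
    parser.feed(html)
    return parser.count

# Hypothetical sample document with three links.
sample = '<p><a href="/a">A</a> <a href="/b">B</a> <a href="/c">C</a></p>'
print(count_links(sample), "links; over the ~200 limit:", count_links(sample) > 200)
```

Running that against the HTML of your own pages would flag the ones where rogerbot stops following links early.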
We have a few different options moving forward. To help you further, I need examples of the following:
- specific URLs that show errors in GWT, but not in Moz.
- a link hierarchy from each of those URLs back to your homepage.
I want to respect your privacy, so if you'd like to take this conversation offline, please email us at help@moz.com!
I hope this helps! Let me know if you have any other questions!
Have a great rest of your Thursday!
Erin