Quantifying Uncertainty in Deep Learning
Access Status: Full text of the requested work is not available in DASH at this time ("dark deposit").
Citation: Holovko, Taras. 2020. Quantifying Uncertainty in Deep Learning. Bachelor's thesis, Harvard College.
Abstract: The deep learning literature has seen an abundance of proposals for novel models of uncertainty in recent years. However, there has been comparatively little emphasis on the need for separate estimates of aleatoric and epistemic uncertainty, which are distinct types of uncertainty that arise from different sources and therefore have different implications for real-world decision-making, especially in safety-critical contexts such as medical diagnosis.
In this thesis, we contribute a systematic, comparative evaluation of metrics for quantifying aleatoric and epistemic uncertainty. In particular, we consider uncertainty estimates derived from traditional measures of variability in categorical distributions, from decompositions of total uncertainty grounded in classical statistical principles, and from out-of-distribution detection metrics. We pair the evaluation of these metrics with a variety of models and inference methods rooted in both traditional and Bayesian deep learning.
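One standard decomposition of total uncertainty of the kind described above splits the entropy of the mean predictive distribution into an expected-entropy (aleatoric) term and a mutual-information (epistemic) term. A minimal NumPy sketch, assuming only an array of per-member predictive distributions (the function names are illustrative, not from the thesis):

```python
import numpy as np

def entropy(p, axis=-1):
    # Shannon entropy in nats; a small epsilon guards against log(0)
    return -np.sum(p * np.log(p + 1e-12), axis=axis)

def decompose_uncertainty(member_probs):
    """member_probs: shape (n_members, n_classes), the categorical
    predictive distributions of each ensemble member for one input."""
    mean_probs = member_probs.mean(axis=0)
    total = entropy(mean_probs)               # H[ E_theta p(y|x,theta) ]
    aleatoric = entropy(member_probs).mean()  # E_theta H[ p(y|x,theta) ]
    epistemic = total - aleatoric             # mutual information I(y; theta)
    return total, aleatoric, epistemic
```

When all members agree, the epistemic term vanishes; when confident members disagree, the total uncertainty is dominated by the epistemic term.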
We extend two separate decompositions of aleatoric and epistemic uncertainty to deep ensembles and to statistical measures of variability in novel ways, and we find that both approaches provide accurate estimates. Finally, we evaluate Monte Carlo dropout as an inference method for both homoscedastic and heteroscedastic regression models, and we find that it does not produce the accurate aleatoric and epistemic uncertainty estimates suggested in the literature.
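Monte Carlo dropout, as evaluated above, keeps dropout active at test time and treats the spread of stochastic forward passes as an uncertainty estimate. A minimal NumPy sketch; the network, weights, and dropout rate are illustrative, not the models used in the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-hidden-layer regression network with fixed random weights
W1 = rng.normal(size=(1, 64))
W2 = rng.normal(size=(64, 1)) / 8.0

def forward(x, p_drop=0.5):
    h = np.maximum(0.0, x @ W1)          # ReLU hidden layer
    mask = rng.random(h.shape) > p_drop  # dropout stays ON at test time
    h = h * mask / (1.0 - p_drop)        # inverted-dropout scaling
    return h @ W2

def mc_dropout_predict(x, n_samples=200):
    """Average n_samples stochastic forward passes; the sample mean is the
    prediction and the sample variance is the uncertainty estimate."""
    samples = np.stack([forward(x) for _ in range(n_samples)])
    return samples.mean(axis=0), samples.var(axis=0)
```

The variance returned here mixes the contributions that the thesis argues should be separated, which is one way the aleatoric/epistemic distinction becomes an empirical question.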
Citable link to this page: https://nrs.harvard.edu/URN-3:HUL.INSTREPOS:37364712
Collections: FAS Theses and Dissertations