Right for the Right Reasons: Training Neural Networks to be Interpretable, Robust, and Consistent with Expert Knowledge
Access Status: Full text of the requested work is not available in DASH at this time ("dark deposit").
Author: Ross, Andrew Slavin
Citation: Ross, Andrew Slavin. 2021. Right for the Right Reasons: Training Neural Networks to be Interpretable, Robust, and Consistent with Expert Knowledge. Doctoral dissertation, Harvard University Graduate School of Arts and Sciences.
Abstract: Neural networks are among the most accurate machine learning methods in use today. However, their opacity and fragility to distribution shifts make them difficult to trust in critical applications. Recent efforts to develop explanations for neural networks have produced tools to shed light on the implicit rules behind predictions. These tools can help us identify when networks are right for the wrong reasons, or, equivalently, when they will fail under distribution shifts that should not affect predictions. However, such explanations are not always at the right level of abstraction, and, more importantly, cannot correct the problems they reveal. In this thesis, we explore methods for training neural networks to make predictions for better reasons, both by incorporating explanations into the training process and by learning representations that better match human concepts. These methods produce models that are more interpretable to users and more robust to distribution shifts.
Citable link to this page: https://nrs.harvard.edu/URN-3:HUL.INSTREPOS:37368260
Collections: FAS Theses and Dissertations