Faithful Saliency Maps: Explaining Neural Networks by Augmenting "Competition for Pixels"
Görns, Jorma Peer
Citation: Görns, Jorma Peer. 2020. Faithful Saliency Maps: Explaining Neural Networks by Augmenting "Competition for Pixels". Bachelor's thesis, Harvard College.
Abstract: For certain machine-learning models such as image classifiers, saliency methods promise to answer a crucial question: at the pixel level, where does the model look when classifying a given image? If existing methods answer this question truthfully, they can bring some level of interpretability to an area of machine learning where it has been inexcusably absent: namely, to image-classifying neural networks, usually considered among the most "black-box" classifiers. A multitude of different saliency methods has been developed over the last few years; recently, however, Adebayo et al. revealed that many of them fail so-called "sanity checks": these methods act as mere edge detectors of the input image, outputting the same convincing-looking saliency map entirely independently of the model under investigation! Not only do they fail to illuminate the inner workings of the model at hand, but they may actually deceive the investigator into believing that the model is working as it should. To fix these deceptive methods and save them from the trash pile of discarded research, Gupta and Arora proposed an algorithm called competition for pixels. Yet, as we uncovered, competition can itself be deceiving! This thesis makes three main contributions: (1) it examines competition for pixels, showing that the algorithm has serious issues in the few-class setting; (2) it proposes an augmentation of the competition algorithm designed to address these issues; and (3) it experimentally verifies the effectiveness of said augmentation.
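The model-randomization sanity check mentioned in the abstract can be illustrated with a minimal sketch. This is not the thesis's code or Adebayo et al.'s implementation; it is a hypothetical toy in which the "classifier" is a single linear layer and the saliency method is the vanilla gradient of the class score with respect to the input. The check: if the saliency map stays (near-)identical after the model's weights are replaced with random ones, the method is describing the input, not the model.

```python
import numpy as np

rng = np.random.default_rng(0)

def saliency(W, x, c):
    """Vanilla-gradient saliency for a linear model with scores s = W @ x.

    The gradient of s[c] with respect to x is simply the weight row W[c],
    so for this toy model the saliency map is model-dependent by construction.
    """
    return W[c]

# Stand-in "trained" model and a flattened "image" (hypothetical shapes).
W = rng.normal(size=(3, 8))
x = rng.normal(size=8)
map_trained = saliency(W, x, c=0)

# Model-randomization sanity check: replace the weights with random ones
# and recompute the map. A faithful method should produce a different map.
W_random = rng.normal(size=W.shape)
map_random = saliency(W_random, x, c=0)

# Here the maps differ, so this trivial method passes the check; the
# edge-detector-like methods criticized in the abstract would not.
changed = not np.allclose(map_trained, map_random)
print(changed)
```

For a real network the same logic applies layer by layer (Adebayo et al. randomize weights cascadingly from the top), with the gradient computed by backpropagation rather than read off a weight row.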
Citable link: https://nrs.harvard.edu/URN-3:HUL.INSTREPOS:37364724
Collections: FAS Theses and Dissertations