Detecting Deepfakes: Philosophical Implications of a Technological Threat
Kerner, Catherine M.
Citation: Kerner, Catherine M. 2020. Detecting Deepfakes: Philosophical Implications of a Technological Threat. Bachelor's thesis, Harvard College.
Abstract: This thesis offers a dual perspective on the threat of deepfake video forgeries, combining the disciplines of computer science and philosophy to address a technologically enabled threat that is specifically designed to manipulate human understanding of reality. There are two main philosophical contributions. The first is a novel formulation of deepfakes as a dual source of epistemic injustice: the harm done to the viewer as a knower, and to a misrepresented subject as someone who is known. The second is a characterization of the unique phenomenological danger of deepfakes, which introduces another dimension of the threat, separate from the epistemic: humans interpret the world through first-person experiences.

Computer scientists have used machine learning image recognition networks to build deepfake detectors. In practice, however, a recognized concern for these detectors is the extent to which they generalize. As neural network classifiers, they learn to identify what a deepfake looks like from the deepfake samples in their training datasets, and may not recognize deepfakes in the wild that fall outside this distribution. Robustness, the ability of a classifier to handle challenging examples, is of real importance in the context of deepfake detection, which is adversarial by nature. I run experiments to evaluate detector robustness to unseen distributions, using three detection networks and the two most recent deepfake datasets, Celeb-DF V2 and the Deepfake Detection Challenge (DFDC), which are largely unrepresented in the detectors' training data. Qualitatively, the two datasets introduce two categories of difficult samples: new manipulation methods (ways of generating deepfakes) and higher visual convincingness of forgeries. Results indicate that generalizing to new manipulation methods is harder than generalizing to deepfakes with higher realism.
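The cross-dataset generalization gap described above can be illustrated with a toy sketch. Everything here is illustrative and not from the thesis: the detector is reduced to a single 1-D "artifact score" with a fixed decision threshold, and the seen and unseen manipulation methods are modeled as Gaussians with different means. The point is only the mechanism: a boundary fit to one fake distribution degrades when a new manipulation method shifts where fakes land in feature space.

```python
import random

random.seed(0)

def make_samples(n, real_mu, fake_mu):
    """Toy 1-D 'artifact score' feature standing in for a learned
    detector's feature space: real and fake frames drawn from
    Gaussians with unit variance (illustrative assumption)."""
    reals = [(random.gauss(real_mu, 1.0), 0) for _ in range(n)]
    fakes = [(random.gauss(fake_mu, 1.0), 1) for _ in range(n)]
    return reals + fakes

def accuracy(samples, threshold):
    """Classify a sample as fake when its score exceeds the threshold."""
    correct = sum((score > threshold) == bool(label)
                  for score, label in samples)
    return correct / len(samples)

# "Seen" distribution: fakes produced by a manipulation method
# represented in the training data.
seen = make_samples(1000, real_mu=0.0, fake_mu=2.0)
threshold = 1.0  # boundary fit to the seen distribution

# "Unseen" manipulation method: its artifacts look different,
# so fake scores shift toward the real cluster.
unseen = make_samples(1000, real_mu=0.0, fake_mu=0.5)

acc_seen = accuracy(seen, threshold)
acc_unseen = accuracy(unseen, threshold)
print(f"seen-method accuracy: {acc_seen:.2f}  "
      f"unseen-method accuracy: {acc_unseen:.2f}")
```

On this toy setup, accuracy on the seen manipulation method is high while accuracy on the unseen one falls toward chance, mirroring (in a cartoon way) why the thesis treats new manipulation methods as the harder generalization case.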
Ultimately, even though detector robustness remains a critical technical challenge, learned classifiers address only the epistemic danger of deepfakes; they have no bearing on the phenomenological dimension. That is, the technologically enabled deepfake threat can be only partially addressed by technological solutions.
Citable link: https://nrs.harvard.edu/URN-3:HUL.INSTREPOS:37364758