Publication:
Detecting Deepfakes: Philosophical Implications of a Technological Threat

Date

2020-06-17

Citation

Kerner, Catherine M. 2020. Detecting Deepfakes: Philosophical Implications of a Technological Threat. Bachelor's thesis, Harvard College.

Abstract

This thesis offers a dual perspective on the threat of deepfake video forgeries, combining the disciplines of computer science and philosophy to address a technologically enabled threat that is specifically designed to manipulate human understanding of reality. There are two main philosophical contributions. The first is a novel formulation of deepfakes as a dual source of epistemic injustice: harm done to the viewer as a knower and to a misrepresented subject as someone who is known. The second is a characterization of the unique phenomenological danger of deepfakes (humans interpret the world through first-person experiences), which introduces another dimension of the deepfake threat, separate from the epistemic one. Computer scientists have used machine learning image recognition networks to build deepfake detectors. In practice, however, a recognized concern for these detectors is the extent to which they generalize. As neural network classifiers, they learn to identify what a deepfake looks like on the basis of the deepfake samples in their training datasets and may not recognize deepfakes in the wild that fall outside this distribution. Robustness, the ability of a classifier to handle challenging examples, is of real importance in the context of deepfake detection, which is adversarial by nature. I run experiments to evaluate detector robustness to unseen distributions, using three detection networks and the two most recent deepfake datasets, Celeb-DF V2 and the Deepfake Detection Challenge (DFDC), which are largely unrepresented in the detectors' training data. Qualitatively, the two datasets introduce two categories of difficult samples: new manipulation methods (ways of generating deepfakes) and higher visual convincingness of forgeries. Results indicate that generalizing to new manipulation methods is harder than generalizing to deepfakes with higher realism. Ultimately, even though detector robustness remains a critical technical challenge, learned classifiers offer a solution only to the epistemic danger of deepfakes; they have no bearing on the phenomenological dimension. That is, the technologically enabled deepfake threat can only be partially addressed by technological solutions.
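
The cross-dataset robustness evaluation described in the abstract can be sketched in code. The following is an illustrative outline only, not the thesis's actual pipeline: the model file name, the face-crop directory layout, and the single-logit output convention are assumptions made for the example. It scores a pretrained binary detector on face crops from an unseen dataset (e.g., Celeb-DF V2 or DFDC) and reports per-frame AUC as one proxy for generalization; the thesis's actual detectors and metrics may differ.

# Illustrative sketch only (not the thesis code): evaluate a pretrained binary
# deepfake detector on face crops from an unseen dataset and report AUC.
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, transforms
from sklearn.metrics import roc_auc_score

# Assumption: a trained detector exported as a TorchScript module; the file
# name is a placeholder.
detector = torch.jit.load("pretrained_detector.pt").eval()

# Assumption: pre-extracted face crops from the unseen dataset arranged as
# unseen_faces/real/*.png and unseen_faces/fake/*.png.
preprocess = transforms.Compose([
    transforms.Resize((299, 299)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5] * 3, std=[0.5] * 3),
])
unseen = datasets.ImageFolder("unseen_faces", transform=preprocess)
loader = DataLoader(unseen, batch_size=32, shuffle=False)
fake_idx = unseen.class_to_idx["fake"]  # ImageFolder assigns class indices alphabetically

scores, labels = [], []
with torch.no_grad():
    for images, targets in loader:
        # Assumption: the detector outputs one logit per image, where a higher
        # value means "more likely fake".
        logits = detector(images).squeeze(1)
        scores.extend(torch.sigmoid(logits).tolist())
        labels.extend((targets == fake_idx).int().tolist())

# AUC on a distribution the detector never saw during training quantifies
# how well it generalizes beyond the manipulation methods it was trained on.
print(f"Cross-dataset AUC: {roc_auc_score(labels, scores):.3f}")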

Terms of Use

This article is made available under the terms and conditions applicable to Other Posted Material (LAA), as set forth in the Terms of Service.
