Publication: Algorithms in Human Decision-Making: A Case Study With the COMPAS Risk Assessment Software
Date
2019-08-23
Authors
Published Version
The Harvard community has made this article openly available.
Citation
Vaccaro, Michelle Anna. 2019. Algorithms in Human Decision-Making: A Case Study With the COMPAS Risk Assessment Software. Bachelor's thesis, Harvard College.
Abstract
This thesis uses the COMPAS algorithm as a case study to investigate the role of algorithmic risk assessments in human decision-making. Prior work on the COMPAS algorithm and similar risk assessment instruments focuses on the technical aspects of the tools, presenting methods to improve their accuracy and theorizing frameworks to evaluate the fairness of their predictions. That research does not, however, consider the algorithm's practical function as a decision-making aid rather than an autonomous decision-maker.
The first experiment addresses the open question of whether algorithmic risk scores influence human predictions of recidivism, using a controlled environment with human subjects. The results indicate that the algorithmic risk scores act as anchors that induce a cognitive bias: participants assimilate their predictions to the algorithm's score. In particular, participants who view the low-anchor algorithm provide risk scores on average 42.3% lower than participants who view the high-anchor algorithm when assessing the same set of defendants. Furthermore, participants only sometimes perceive when the algorithm's risk scores influence their decisions.
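The anchor-condition comparison behind a figure like the 42.3% gap can be sketched as a relative difference in group means. The risk scores below are invented for illustration only and are not the thesis's data:

```python
# Hypothetical risk predictions (1-10 scale) from two anchor conditions.
# These values are illustrative, not taken from the thesis.
low_anchor = [2, 3, 2, 4, 3, 2]    # participants shown a low algorithmic score
high_anchor = [5, 6, 4, 7, 5, 6]   # participants shown a high algorithmic score

mean_low = sum(low_anchor) / len(low_anchor)
mean_high = sum(high_anchor) / len(high_anchor)

# Relative reduction of the low-anchor mean versus the high-anchor mean:
# the style of comparison behind the thesis's "42.3% lower" result.
relative_reduction = (mean_high - mean_low) / mean_high
print(f"low-anchor predictions are {relative_reduction:.1%} lower on average")
```

Anchoring appears when this gap is large despite both groups assessing the same defendants, so any systematic difference is attributable to the displayed score.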
The follow-up experiment explores how the COMPAS algorithm specifically affects the accuracy and fairness of defendant risk assessments. When predicting the risk that a defendant will recidivate, the COMPAS algorithm achieves a significantly higher accuracy rate than the participants who assess defendant profiles (65.0% vs. 54.2%). Yet when participants incorporate the algorithm's risk assessments into their decisions, their accuracy does not improve. In contrast, when participants view the COMPAS scores, the fairness of their predictions changes according to measures of balanced error rates and accuracy equity: they produce more favorable outcomes for white defendants than for black defendants. The experiment also evaluates the effect of presenting an advisement designed to warn of this potential for disparate impact on minorities. The findings suggest, however, that the advisement does not significantly affect either the accuracy of recidivism predictions or their fairness according to measures of balanced error rates, predictive parity, and accuracy equity.
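The fairness measures named above can each be read off a per-group confusion matrix: balanced error rates compare false positive and false negative rates across groups, predictive parity compares precision, and accuracy equity compares overall accuracy. A minimal sketch, with hypothetical labels and function names not taken from the thesis:

```python
# Per-group fairness metrics from binary recidivism labels (1 = recidivated)
# and binary predictions (1 = predicted high risk). Illustrative only.
def confusion_counts(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return tp, fp, fn, tn

def group_metrics(y_true, y_pred):
    tp, fp, fn, tn = confusion_counts(y_true, y_pred)
    return {
        # Balanced error rates: FPR and FNR should match across groups.
        "fpr": fp / (fp + tn),
        "fnr": fn / (fn + tp),
        # Predictive parity: precision (PPV) should match across groups.
        "ppv": tp / (tp + fp),
        # Accuracy equity: overall accuracy should match across groups.
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# Fairness questions compare these dictionaries across demographic groups;
# the labels below are made-up examples, not experimental data.
group_a = group_metrics([1, 0, 1, 0, 1], [1, 0, 0, 1, 1])
group_b = group_metrics([1, 0, 1, 0, 0], [1, 1, 1, 0, 0])
```

Under these definitions, a disparity such as a higher false positive rate for one group alongside a higher false negative rate for another is a violation of balanced error rates even when overall accuracy is similar.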
Based on the theoretical findings from the existing literature, some policymakers and software engineers contend that algorithmic risk assessments like the COMPAS software can alleviate the incarceration epidemic and the occurrence of violent crimes by informing and improving decisions about policing, treatment, and sentencing. The results from this thesis, however, indicate that if algorithmic risk assessments are to address these problems successfully, future research must investigate their practical role: as an input to human decision-makers.
Terms of Use
This article is made available under the terms and conditions applicable to Other Posted Material (LAA), as set forth in the Terms of Service.