Publication: Risk Roulette: How Lawyers Make Pretrial Risk Assessment Tools Matter in Criminal Court
Date
2024-05-31
Citation
Robson, Cierra. 2024. Risk Roulette: How Lawyers Make Pretrial Risk Assessment Tools Matter in Criminal Court. Doctoral dissertation, Harvard University Graduate School of Arts and Sciences.
Abstract
Scholarly accounts of judicial decision making in the context of pretrial risk assessment tools often explore how judicial adherence to a risk assessment recommendation is shaped by individual-level characteristics like demographics and attitudes toward technology. In focusing on the individual, though, these accounts leave out the formal laws, policies, and procedures that necessarily shape how judges can act on such information. Drawing on ethnographic observations of pretrial hearings, in-depth interviews with defense attorneys and prosecutors, and archival data from New Jersey, this dissertation examines how the institutional rules developed to accompany the adoption of the risk assessment tool shape judicial action: they prompt various actors to develop alternative conceptions of risk that judges must, by law, consider alongside the statistical determinations presented by the risk assessment tool.
Chapter 1 describes the New Jersey context within which this project is situated. It details how the legislature came to develop the Criminal Justice Reform Act (CJRA), a landmark piece of legislation meant to correct an overwhelmingly unjust pretrial detention system. In an attempt to standardize pretrial detention decisions, the legislature required the use of a risk assessment tool called the Public Safety Assessment (PSA), an algorithm meant to calculate a defendant’s risk. At the same time that the law required the use of the algorithm, it also exhibited an inherent skepticism of the algorithm: rather than require that judges wholly adhere to the risk determinations of the tool, the law mandated that judges consider several alternative conceptions of risk developed by the legislature, prosecutors, and defense attorneys. By law, the judge is required to weigh each of these features to reach a decision. The “recommendation” in this context, then, comprises several competing pieces of information.
Chapter 2 explores two such risk determinations: the PSA as a statistically validated risk assessment tool, and the corresponding policy application called the Decision Making Framework (DMF). While the PSA produces risk scores, the DMF determines release recommendations. While previous literature has treated these concepts as synonymous, I show throughout this chapter that they are empirically distinct: although the PSA score is related to the DMF’s recommendation, the DMF also contains a list of crimes for which recommendations of “No Release” are required, regardless of the PSA score. Using data from over 560 cases, this chapter shows how judges combine information presented in the PSA with information presented in the DMF to arrive at a release decision. Rather than strictly adhering to the risk score or release recommendation, judges act in ways patterned by discrepancies between the risk determinations presented by the PSA and those presented by the DMF.
Chapter 3 evaluates two other risk determinations: those created by defense attorneys and prosecutors and presented through arguments. To substantiate these determinations of risk, lawyers deploy two argumentative strategies to challenge and uphold parts of the PSA and DMF. First, lawyers deploy substantive attacks on the algorithm’s variables, calculations, and accuracy for a given defendant. Second, lawyers use procedural attacks to contest the proper use of the algorithms, even when the calculations are technically correct. Unlike previous literature, which has held that individuals maintain uniform perceptions of algorithms, this chapter argues that lawyers in New Jersey variably interpret and deploy the same tool on a case-by-case basis to justify their own determinations of risk.
Chapter 4 shows how discrepancies between the PSA and the DMF combine with arguments from lawyers to produce predictable patterns of judicial behavior. Relying upon the rare instances in which lawyers do not deploy arguments, this chapter shows that judges make sense of the discrepancies between the PSA and the DMF through the arguments of lawyers. When lawyers fail to deploy certain argumentative strategies, the anticipated patterns of judicial behavior occur less often.
In all, the story of New Jersey represents an instance of what I term institutional algorithmic aversion: an inherent skepticism of algorithms that is embedded in the laws, policies, and rules that shape judicial action. Despite their skepticism toward algorithms, institutions that exhibit algorithmic aversion still mandate the use of algorithms, but attempt to curb their authority over important decisions by mandating that decision-makers consider alternative conceptions of risk. While progressive on their surface, laws and policies that exhibit institutional algorithmic aversion grant actors proximate to the judge discretion over these risk determination practices. In so doing, they leave room for these actors to roll back progressive policy reforms.
Keywords
Sociology, Criminology
Terms of Use
This article is made available under the terms and conditions applicable to Other Posted Material (LAA), as set forth at Terms of Service