Publication:
Statistical Perspectives on Algorithmic Fairness: Quantifying Group Fairness in Thresholding Decisions

Date

2024-06-12

Citation

Li, Angela Yilin. 2024. Statistical Perspectives on Algorithmic Fairness: Quantifying Group Fairness in Thresholding Decisions. Bachelor's thesis, Harvard University Engineering and Applied Sciences.

Abstract

Machine learning algorithms have been entrusted with increasingly consequential, high-impact decisions over the past few decades; numerous examples of unfair outcomes, however, spawned the research field of algorithmic fairness over the past decade. Work in this field has focused primarily on rigorously defining fairness as it relates to machine learning procedures and outcomes, and on proposing robust methods for correcting unfairness; a gap remains in rigorously identifying and quantifying the extent of unfairness in a statistical sense. In this thesis, we provide novel derivations of the distributions of five fundamental group fairness metrics (accuracy, acceptance rate, false positive rate, false negative rate, and positive predictive value) and of the distributions of their differences across protected groups. These derivations serve as the basis for constructing confidence intervals, which provide a principled framework for assessing uncertainty and the extent of unfairness with respect to the disparity of group fairness quantities between protected groups. We hope these statistical tools contribute new perspectives and understanding to the highly multidisciplinary field of algorithmic fairness.
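
To make the abstract's proposal concrete, the sketch below illustrates the kind of interval it describes, using the simplest of the five metrics: the acceptance rate, P(Ŷ = 1). This is a minimal illustration, not the thesis's own derivation; the function name, the simulated data, and the choice of the standard Wald (normal-approximation) two-proportion interval are assumptions made for this example.

```python
import numpy as np
from scipy.stats import norm

def acceptance_rate_diff_ci(yhat_a, yhat_b, alpha=0.05):
    """Wald-style confidence interval for the difference in acceptance
    rates, P(Yhat = 1), between two protected groups.

    yhat_a, yhat_b: arrays of 0/1 model decisions for groups A and B.
    Returns (point_estimate, lower, upper).
    """
    yhat_a, yhat_b = np.asarray(yhat_a), np.asarray(yhat_b)
    p_a, n_a = yhat_a.mean(), len(yhat_a)
    p_b, n_b = yhat_b.mean(), len(yhat_b)
    diff = p_a - p_b
    # Standard error of a difference of independent binomial proportions.
    se = np.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    z = norm.ppf(1 - alpha / 2)
    return diff, diff - z * se, diff + z * se

# Simulated decisions for two groups (hypothetical data for illustration).
rng = np.random.default_rng(0)
diff, lo, hi = acceptance_rate_diff_ci(
    rng.binomial(1, 0.55, size=500),  # group A decisions
    rng.binomial(1, 0.45, size=500),  # group B decisions
)
# If 0 lies outside the interval, the acceptance-rate disparity is
# statistically distinguishable from zero at the chosen level.
print(f"difference = {diff:.3f}, 95% CI = ({lo:.3f}, {hi:.3f})")
```

Analogous intervals for the label-conditional metrics (false positive rate, false negative rate, positive predictive value) would restrict each group's sample to the relevant true-label or predicted-label subset before forming the proportions, which is where the group sample sizes, and hence the interval widths, can differ substantially.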

Keywords

Statistics

Terms of Use

This article is made available under the terms and conditions applicable to Other Posted Material (LAA), as set forth in the Terms of Service.
