Publication:
Fairness in Machine Learning: Methods for Correcting Black-Box Algorithms

Date

2019-10-25

Citation

Merchant, Amil. 2019. Fairness in Machine Learning: Methods for Correcting Black-Box Algorithms. Bachelor's thesis, Harvard College.

Abstract

Machine learning is being used increasingly across a wide range of social domains. These algorithms are already trusted to make impactful decisions in areas including loan grading, personalized medicine, hiring, and policing. Unfortunately, many of these models have recently been criticized for discriminating against individuals on the basis of race or sex. This is particularly problematic from a legal perspective and has led to challenges over the use of these algorithms. In this thesis, we consider what would be needed to make a machine learning model fair according to the law. Special emphasis is placed on the COMPAS algorithm, a black-box machine learning model used for criminal recidivism prediction that has recently been shown to have a discriminatory impact on defendants of different races. We test two algorithmic methods, adversarial examples and adversarial networks, both of which show significant progress toward meeting the proposed legal requirements of fairness.
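
For context on the second method named above, the sketch below shows how an adversarial-network debiasing setup is commonly arranged: a predictor is trained on the task while an adversary tries to recover the protected attribute from the predictor's output, and the predictor is penalized when the adversary succeeds. This is a minimal, hypothetical illustration only; the network sizes, the synthetic data, and the trade-off weight lam are assumptions for this example and do not reflect the thesis's actual implementation.

```python
# Minimal adversarial-debiasing sketch (illustrative assumptions throughout).
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic data: features X, binary label y, binary protected attribute a.
n, d = 1000, 8
X = torch.randn(n, d)
a = (torch.rand(n) < 0.5).float()                              # protected attribute
y = ((X[:, 0] + 0.5 * a + 0.1 * torch.randn(n)) > 0).float()   # label correlated with a

predictor = nn.Sequential(nn.Linear(d, 16), nn.ReLU(), nn.Linear(16, 1))
adversary = nn.Sequential(nn.Linear(1, 8), nn.ReLU(), nn.Linear(8, 1))

opt_p = torch.optim.Adam(predictor.parameters(), lr=1e-3)
opt_a = torch.optim.Adam(adversary.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
lam = 1.0  # fairness/accuracy trade-off weight (assumed)

for step in range(500):
    # 1) Train the adversary to recover the protected attribute from predictions.
    logits = predictor(X).detach()
    adv_loss = bce(adversary(logits).squeeze(1), a)
    opt_a.zero_grad()
    adv_loss.backward()
    opt_a.step()

    # 2) Train the predictor to fit y while making the adversary's job harder.
    logits = predictor(X)
    task_loss = bce(logits.squeeze(1), y)
    adv_loss = bce(adversary(logits).squeeze(1), a)
    p_loss = task_loss - lam * adv_loss
    opt_p.zero_grad()
    p_loss.backward()
    opt_p.step()
```

Larger values of lam push the predictor's outputs to carry less information about the protected attribute, at some cost in predictive accuracy; how that trade-off is evaluated against legal fairness criteria is the subject of the thesis itself.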

Terms of Use

This article is made available under the terms and conditions applicable to Other Posted Material (LAA), as set forth at Terms of Service
