Fairness in Machine Learning: Methods for Correcting Black-Box Algorithms
Citation
Merchant, Amil. 2019. Fairness in Machine Learning: Methods for Correcting Black-Box Algorithms. Bachelor's thesis, Harvard College.
Abstract
Machine learning is being used more frequently across a wide range of social domains. These algorithms are already trusted to make impactful decisions on topics including loan grades, personalized medicine, hiring, and policing. Unfortunately, many of these models have recently been criticized for discriminating against individuals on the basis of race or sex. This is particularly problematic from a legal perspective and has led to challenges over the use of these algorithms. In this thesis, we consider what would be needed to make a machine learning model fair according to the law. Special emphasis is placed on the COMPAS algorithm, a black-box machine learning model used for criminal recidivism prediction that has recently been shown to have a discriminatory impact on defendants of different races. We test two algorithmic methods, adversarial examples and adversarial networks, that show significant progress toward meeting the proposed legal requirements of fairness.
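The abstract only names the two methods; as a rough illustration of the adversarial-network idea (a minimal sketch, not the author's implementation), the PyTorch code below trains a predictor jointly with an adversary that tries to recover the protected attribute from the predictor's score, and penalizes the predictor whenever the adversary succeeds. The synthetic data, network sizes, and the trade-off weight lam are all hypothetical choices made for illustration.

import torch
import torch.nn as nn

# Hypothetical synthetic data: features X, task label y, protected attribute a.
torch.manual_seed(0)
n, d = 1000, 10
X = torch.randn(n, d)
a = torch.randint(0, 2, (n, 1)).float()                            # protected attribute
y = ((X[:, :1] + 0.5 * a + 0.1 * torch.randn(n, 1)) > 0).float()   # label correlated with a

predictor = nn.Sequential(nn.Linear(d, 16), nn.ReLU(), nn.Linear(16, 1))
adversary = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))

bce = nn.BCEWithLogitsLoss()
opt_pred = torch.optim.Adam(predictor.parameters(), lr=1e-3)
opt_adv = torch.optim.Adam(adversary.parameters(), lr=1e-3)
lam = 1.0  # trade-off between task accuracy and fairness (illustrative value)

for epoch in range(200):
    # 1) Train the adversary to predict the protected attribute from the score.
    scores = predictor(X).detach()
    adv_loss = bce(adversary(scores), a)
    opt_adv.zero_grad()
    adv_loss.backward()
    opt_adv.step()

    # 2) Train the predictor to fit the task while fooling the adversary.
    scores = predictor(X)
    pred_loss = bce(scores, y) - lam * bce(adversary(scores), a)
    opt_pred.zero_grad()
    pred_loss.backward()
    opt_pred.step()

Larger values of lam push the predictor's scores toward carrying less information about the protected attribute, at some cost in task accuracy; the thesis evaluates this kind of trade-off against its proposed legal requirements.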
Terms of Use
This article is made available under the terms and conditions applicable to Other Posted Material, as set forth at http://nrs.harvard.edu/urn-3:HUL.InstRepos:dash.current.terms-of-use#LAA
Citable link to this page
https://nrs.harvard.edu/URN-3:HUL.INSTREPOS:37364658
Collections
- FAS Theses and Dissertations