Publication: Machine Learning for Humans: Building Models that Adapt to Behavior
Date
2021-09-09
Authors
Hilgard, Anna Sophia
Published Version
Journal Title
Journal ISSN
Volume Title
Publisher
The Harvard community has made this article openly available.
Citation
Hilgard, Anna Sophia. 2021. Machine Learning for Humans: Building Models that Adapt to Behavior. Doctoral dissertation, Harvard University Graduate School of Arts and Sciences.
Research Data
Abstract
As machine learning continues to exhibit remarkable performance across a wide range of experimental tasks, there is increasing enthusiasm for deploying these models in the real world. However, the traditional supervised learning framework optimizes performance without considering how these models will be used by humans. In nearly all applications, human interaction affects the generation of input data, outcomes, or both. For example, a doctor may choose to either incorporate or override a machine-generated medical risk score; this judgment influences outcomes and invalidates performance estimates made for the model in isolation. In movie recommendation, the digital records of viewing behavior used as input data are themselves guided by the recommendation engine, so the distribution of input data is a function of the recommender itself. Human behavior is dynamic and responsive, and failing to account for this leads to suboptimal and even harmful results when machine learning models trained in isolation begin to interact with human stakeholders.
In this thesis, I consider humans in three different roles relative to the machine learning system: humans as model users, humans as model subjects, and humans as model auditors. For the first two configurations, I develop new frameworks that can account for and adapt to relevant human behavior. For the last configuration, I reveal an important vulnerability in popular tools intended to assist human auditors. Specifically, when humans are model users, I design a new model architecture and training procedure that allows machine learning decision aids to adapt directly to how humans use them, optimizing the performance of the entire machine-human pipeline rather than machine accuracy alone. This system is validated in experiments with real human users, confirming its ability to adapt productively to different human behaviors. For humans as model subjects, I introduce a new form of model regularization that accounts for the motivation of subjects to adopt new behaviors when they regard predictive models as accurate proxies for causal phenomena. This look-ahead regularizer balances model accuracy against ensuring that the behavior changes the model motivates in users lead to positive outcomes with high probability. Finally, I construct an adversarial model capable of causing popular explainability tools to lead human auditors to incorrect inferences about model behavior. I show that, on a variety of real-world datasets, predictive models can exhibit discriminatory behavior (e.g., racial or gender disparities in outcomes) while passing proposed tests for such behavior.
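
The look-ahead idea can be illustrated with a small numerical sketch. The code below is not the dissertation's implementation: the data-generating process, the assumed outcome oracle true_outcome, the user behavior model (a fixed-size step in the direction the predictor rewards), and the penalty weight LAMBDA are all hypothetical choices made for illustration. It fits a linear predictor while penalizing reliance on features whose model-motivated changes would not deliver the improvement in true outcomes that the model implies.

import numpy as np

rng = np.random.default_rng(0)
n = 300
x0 = rng.normal(size=n)                      # causal feature
u = rng.normal(size=n)                       # hidden confounder (assumption)
x1 = x0 + 0.3 * u                            # predictive but non-causal feature
X = np.column_stack([x0, x1])

def true_outcome(X):
    # Assumed ground truth: only feature 0 causally affects the outcome.
    return X[:, 0]

y = true_outcome(X) + 0.3 * u + 0.05 * rng.normal(size=n)   # observed labels

LAMBDA = 2.0   # strength of the look-ahead penalty (illustrative choice)
STEP = 0.5     # size of the feature change users are assumed to make

def lookahead_penalty(w):
    # Users treat the model as causal and nudge features toward higher predictions.
    direction = w / (np.linalg.norm(w) + 1e-12)
    X_new = X + STEP * direction
    promised = (X_new - X) @ w                            # improvement the model implies
    delivered = true_outcome(X_new) - true_outcome(X)     # improvement users actually get
    return np.mean(np.maximum(promised - delivered, 0.0) ** 2)

def objective(w):
    mse = np.mean((X @ w - y) ** 2)                       # standard accuracy term
    return mse + LAMBDA * lookahead_penalty(w)

# Tiny random local search keeps the sketch dependency-free.
w, best = np.zeros(2), objective(np.zeros(2))
for _ in range(20000):
    cand = w + 0.05 * rng.normal(size=2)
    val = objective(cand)
    if val < best:
        w, best = cand, val

print("weights with look-ahead penalty:", np.round(w, 2))
print("ordinary least squares:         ", np.round(np.linalg.lstsq(X, y, rcond=None)[0], 2))

In this toy setup the unregularized fit leans on the non-causal feature because it happens to predict well, while the penalized fit shifts weight to the causal feature, so that the changes it motivates actually pay off.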
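The adversarial result on explainability tools can also be sketched in a few lines. The wrapper below is an illustrative simplification, not the dissertation's construction: it assumes the explainer probes the model with synthetic, off-distribution perturbations, flags those probes with a crude distance test, and answers them with an innocuous rule while applying a biased rule to real inputs. The detector, the two inner rules, and the threshold are hypothetical.

import numpy as np

rng = np.random.default_rng(1)

def biased_model(X):
    # Hypothetical discriminatory rule: decision driven by a sensitive feature (column 0).
    return (X[:, 0] > 0).astype(int)

def innocuous_model(X):
    # Hypothetical benign rule: decision driven only by a legitimate feature (column 1).
    return (X[:, 1] > 0).astype(int)

def is_real_input(X, reference, threshold=2.0):
    # Crude out-of-distribution detector: flags points far from the observed data,
    # which is where perturbation-based explainers place most of their queries.
    dists = np.min(np.linalg.norm(X[:, None, :] - reference[None, :, :], axis=2), axis=1)
    return dists < threshold

def adversarial_model(X, reference):
    # Behave in a biased way on real inputs, innocuously on suspected explainer queries.
    real = is_real_input(X, reference)
    return np.where(real, biased_model(X), innocuous_model(X))

reference = rng.normal(size=(200, 2))             # real data cluster
queries = rng.uniform(-10, 10, size=(200, 2))     # wide perturbations an explainer might sample
print("real inputs handled by the biased rule:",
      np.mean(is_real_input(reference, reference)))
print("explainer-style queries routed to the innocuous rule:",
      np.mean(~is_real_input(queries, reference)))

Because the explanation is built almost entirely from the routed queries, a tool that trusts them would report the innocuous rule even though real decisions follow the biased one.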
Description
Other Available Sources
Keywords
Computer science
Terms of Use
This article is made available under the terms and conditions applicable to Other Posted Material (LAA), as set forth in the Terms of Service.