The Dangers of Algorithmic Autonomy: Efficient Machine Learning Models of Inductive Biases Combined With the Strengths of Program Synthesis (PBE) to Combat Implicit Biases of the Brain
Citation: Halder, Sumona. 2020. The Dangers of Algorithmic Autonomy: Efficient Machine Learning Models of Inductive Biases Combined With the Strengths of Program Synthesis (PBE) to Combat Implicit Biases of the Brain. Bachelor's thesis, Harvard College.
Abstract: In current research on machine learning algorithms, we face a large ethical and moral issue. Our algorithms, which are intended to provide us with optimal and unbiased decisions, have started to emulate the implicit biases of humans. This has bled into our society in destructive ways, as these algorithms have been widely applied to crucial decisions in our lives such as jobs, bank loans, and college admissions. However, due to the lack of proper training data and the inherent nature of implicit bias, reversing this in algorithms has proven quite challenging. Inductive bias, on the other hand, offers some insight into how we can generalize from specific examples and improve future predictions. While inductive reasoning is not immune to implicit biases, it can be used to properly train algorithms to produce better outcomes.
In this paper, we explore a theoretical solution through a mechanism-first strategy, building upon a foundation of cognitive processes. We examine ways to better implement inductive reasoning to combat implicit bias. While we consider the strengths of larger training sets and reinforcement learning alongside inductive bias, we make a case for a model that combines the strengths of machine learning and programming by examples (PBE) to tackle these issues.
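To illustrate the programming-by-examples idea the abstract invokes, here is a minimal sketch: given a handful of input-output pairs, a synthesizer searches a tiny domain-specific language for a program consistent with every example. The operation set and function names below are purely illustrative assumptions, not the thesis's actual model.

```python
# Minimal PBE sketch: enumerate a small hypothetical DSL of string
# transformations and return the first one consistent with all examples.
OPERATIONS = {
    "upper": str.upper,
    "lower": str.lower,
    "strip": str.strip,
    "reverse": lambda s: s[::-1],
}

def synthesize(examples):
    """Return the name of the first DSL operation matching every (input, output) pair."""
    for name, op in OPERATIONS.items():
        if all(op(inp) == out for inp, out in examples):
            return name
    return None  # no program in the DSL explains the examples

# The synthesizer generalizes from specific examples, mirroring the
# inductive-reasoning framing in the abstract.
program = synthesize([("Ada", "ADA"), ("Turing", "TURING")])
```

The example-driven search makes the system's inductive bias explicit: it can only ever propose programs from the enumerated DSL, which is precisely the kind of controllable bias the abstract contrasts with the implicit biases learned from data.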
Citable link to this page: https://nrs.harvard.edu/URN-3:HUL.INSTREPOS:37364669