Making Peer Prediction Practical
Citation
Shnayder, Victor. 2016. Making Peer Prediction Practical. Doctoral dissertation, Harvard University, Graduate School of Arts & Sciences.
Abstract
My dissertation is on crowdsourcing---using crowds of people to accomplish tasks that would otherwise be impractical or far more expensive. I focus specifically on crowdsourcing of information, where workers do tasks such as analyzing images, translating sentences, reporting whether a cafe has public wi-fi, or assessing the writing quality of essays. To encourage participation, workers can be paid or given non-monetary rewards. In many applications, it is difficult to assess whether responses from a large crowd are accurate, and this can tempt workers into submitting nonsense, allowing them to complete tasks faster and earn higher rewards. There are a number of ways to detect this and encourage worker effort and accurate reporting; I apply a technique called peer prediction, which rewards workers based on patterns of agreement among their reports.

I am particularly motivated by the challenge of providing education at scale: how to enable billions of people to learn what they want, at a cost even the very poor can afford. Specifically, I study peer assessment of open-ended assignments as a way to scale human feedback. I treat this as a crowdsourcing problem, and study how peer prediction can encourage effort and accurate assessment when students give feedback to their peers.
Previous work in peer prediction has highlighted the need for reward mechanisms under which exerting effort and reporting truthfully is better for workers than other reporting strategies. I make three main contributions. First, I present a new Correlated Agreement mechanism for peer prediction in multi-signal environments that guarantees uninformed reporting is less attractive than truthful reporting. Second, I show that replicator dynamics is a useful tool for analyzing the likelihood and stability of truthful behavior when workers are not assumed to be fully rational but instead learn from experience. Finally, I analyze a dataset of three million peer assessments from online courses on the edX platform, studying several challenges for using peer prediction for peer assessment in education: reward variability, reward magnitude, and low-effort reporting. I compare several peer prediction mechanisms and conclude that peer prediction is a promising technique in this domain when combined with other efforts to improve feedback quality.
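To make the first contribution concrete, the following is a minimal, hypothetical sketch of a Correlated-Agreement-style scoring rule in Python. The function names, the synthetic joint signal distribution, and the example reports are my own illustrative assumptions, not code from the dissertation: a worker earns a bonus for agreeing with a peer on a shared task (agreement counted only for signal pairs that are positively correlated) minus a penalty for the same kind of agreement computed across two distinct tasks.

```python
# Hypothetical sketch of a Correlated Agreement (CA) style scorer.
# The data and names below are illustrative assumptions only.
import numpy as np

def delta_matrix(joint):
    """Joint signal distribution minus the product of its marginals."""
    p1 = joint.sum(axis=1, keepdims=True)   # marginal of worker 1's signal
    p2 = joint.sum(axis=0, keepdims=True)   # marginal of worker 2's signal
    return joint - p1 * p2

def ca_score(r1_shared, r2_shared, r1_other, r2_other, score_matrix):
    """Reward agreement on a shared task, penalize agreement across distinct tasks."""
    bonus = score_matrix[r1_shared, r2_shared]
    penalty = score_matrix[r1_other, r2_other]
    return bonus - penalty

# Example: three signal values with a positively correlated diagonal.
joint = np.array([[0.20, 0.05, 0.05],
                  [0.05, 0.20, 0.05],
                  [0.05, 0.05, 0.30]])
S = (delta_matrix(joint) > 0).astype(float)  # score 1 only for positively correlated pairs
print(ca_score(2, 2, 0, 1, S))  # agree on the shared task, differ elsewhere -> 1.0
```

Under this kind of rule, blindly reporting the same value everywhere inflates the penalty term as much as the bonus term, which is the intuition behind uninformed reporting being less attractive than truthful reporting.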
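The replicator-dynamics analysis can likewise be sketched with a short simulation. The payoff matrix and strategy labels below are made-up assumptions for illustration, not values estimated in the thesis; the update is the standard discrete-time replicator step, in which a reporting strategy grows in the population when its expected payoff exceeds the population average.

```python
# Sketch of discrete-time replicator dynamics over reporting strategies.
# Payoffs are invented for illustration; they are not from the dissertation.
import numpy as np

def replicator_step(x, payoff, step=0.1):
    """One Euler step of x_i' = x_i * (f_i(x) - f_bar(x))."""
    f = payoff @ x        # expected payoff of each strategy against the population
    f_bar = x @ f         # population-average payoff
    return x + step * x * (f - f_bar)

# Strategies: 0 = truthful effort, 1 = uninformed reporting.
payoff = np.array([[1.0, 0.2],    # truthful earns more when peers are also truthful
                   [0.3, 0.3]])
x = np.array([0.5, 0.5])          # start from a mixed population
for _ in range(200):
    x = replicator_step(x, payoff)
print(x)                          # mass shifts toward the truthful strategy
```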
Terms of Use
This article is made available under the terms and conditions applicable to Other Posted Material, as set forth at http://nrs.harvard.edu/urn-3:HUL.InstRepos:dash.current.terms-of-use#LAA
Citable link to this page
http://nrs.harvard.edu/urn-3:HUL.InstRepos:33840648
Collections
- FAS Theses and Dissertations