dc.contributor.author: Komarov, Steven
dc.contributor.author: Reinecke, Katharina
dc.contributor.author: Gajos, Krzysztof Z
dc.date.accessioned: 2014-06-26T14:24:42Z
dc.date.issued: 2013
dc.identifier.citation: Komarov, Steven, Katharina Reinecke, and Krzysztof Z. Gajos. 2013. "Crowdsourcing Performance Evaluations of User Interfaces." In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Paris, France, April 27-May 2, 2013.
dc.identifier.uri: http://nrs.harvard.edu/urn-3:HUL.InstRepos:12363924
dc.description.abstract: Online labor markets, such as Amazon's Mechanical Turk (MTurk), provide an attractive platform for conducting human subjects experiments because the relative ease of recruitment, low cost, and a diverse pool of potential participants enable larger-scale experimentation and a faster experimental revision cycle than lab-based settings. However, because the experimenter gives up direct control over the participants' environments and behavior, concerns about the quality of data collected in online settings are pervasive. In this paper, we investigate the feasibility of conducting online performance evaluations of user interfaces with anonymous, unsupervised, paid participants recruited via MTurk. We implemented three performance experiments to re-evaluate three previously well-studied user interface designs. We conducted each experiment both in the lab and online with participants recruited via MTurk. The analysis of our results did not yield any evidence of significant or substantial differences between the data collected in the two settings: all statistically significant differences detected in the lab were also present on MTurk, and the effect sizes were similar. In addition, there were no significant differences between the two settings in raw task completion times, error rates, consistency, or the rates of utilization of the novel interaction mechanisms introduced in the experiments. These results suggest that MTurk may be a productive setting for conducting performance evaluations of user interfaces, providing a complementary approach to existing methodologies.
dc.description.sponsorship: Engineering and Applied Sciences
dc.language.iso: en_US
dc.publisher: ACM Press
dc.relation.isversionof: doi:10.1145/2470654.2470684
dash.license: OAP
dc.subject: Crowdsourcing
dc.subject: Mechanical Turk
dc.subject: User Interface Evaluation
dc.title: Crowdsourcing performance evaluations of user interfaces
dc.type: Conference Paper
dc.description.version: Accepted Manuscript
dc.relation.journal: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems - CHI '13
dash.depositing.author: Gajos, Krzysztof Z
dc.date.available: 2014-06-26T14:24:42Z
dc.identifier.doi: 10.1145/2470654.2470684
dash.contributor.affiliated: Reinecke, Katharina
dash.contributor.affiliated: Gajos, Krzysztof

