Decision-Making in Research Tasks with Sequential Testing

Title: Decision-Making in Research Tasks with Sequential Testing
Author: Pfeiffer, Thomas; Rand, David Gertler; Dreber-Almenberg, Anna

Note: Order does not necessarily reflect citation order of authors.

Citation: Pfeiffer, Thomas, David G. Rand, and Anna Dreber. 2009. Decision-Making in Research Tasks with Sequential Testing. PLoS ONE 4(2): e4607.
Abstract:

Background: In a recent controversial essay by J. P. A. Ioannidis, published in PLoS Medicine, it is argued that in some research fields most of the published findings are false. Theoretical reasoning shows that small effect sizes, error-prone tests, low priors of the tested hypotheses, and biases in the evaluation and publication of research findings all increase the fraction of false positives. These findings raise concerns about the reliability of research. However, they are based on a very simple scenario of scientific research in which single tests are used to evaluate independent hypotheses.

Methodology/Principal Findings: In this study, we present computer simulations and experimental approaches for analyzing more realistic scenarios, in which research tasks are solved sequentially, i.e. subsequent tests can be chosen depending on previous results. We investigate simple sequential testing as well as scenarios where only a selected subset of results can be published and used for future rounds of test choice. Results from computer simulations indicate that for the tasks analyzed in this study, the fraction of false positives among the positive findings declines over several rounds of testing if the most informative tests are performed. Our experiments show that human subjects frequently perform the most informative tests, leading to a decline of false positives as expected from the simulations.

Conclusions/Significance: For the research tasks studied here, findings tend to become more reliable over time. We also find that performance is surprisingly inefficient in those experimental settings where not all performed tests can be published. Our results may help optimize existing procedures used in the practice of scientific research and provide guidance for the development of novel forms of scholarly communication.
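The abstract's central claim, that repeating tests over several rounds drives down the fraction of false positives among the positive findings, can be illustrated with a minimal Monte Carlo sketch. All parameter values here (the prior on a hypothesis being true, the false positive rate, the test's power) are assumed purely for illustration and are not taken from the paper's model:

```python
import random

def false_positive_fraction(prior=0.1, alpha=0.05, power=0.8,
                            rounds=3, n=100_000, seed=0):
    """Illustrative sketch (not the paper's model): fraction of false
    hypotheses among those that test positive in every one of `rounds`
    sequential tests.  `prior` is the assumed probability a hypothesis
    is true, `alpha` the test's false positive rate, `power` its true
    positive rate."""
    rng = random.Random(seed)
    true_pos = false_pos = 0
    for _ in range(n):
        is_true = rng.random() < prior
        # A true hypothesis passes each test with probability `power`,
        # a false one with probability `alpha`; a finding counts as
        # "positive" only if it passes every round.
        hit_rate = power if is_true else alpha
        if all(rng.random() < hit_rate for _ in range(rounds)):
            if is_true:
                true_pos += 1
            else:
                false_pos += 1
    positives = true_pos + false_pos
    return false_pos / positives if positives else 0.0
```

Because the false positive rate per test is lower than the power, false hypotheses are filtered out faster than true ones, so under these assumed parameters the false-positive fraction after three rounds is far below the single-test fraction.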
Published Version: doi:10.1371/journal.pone.0004607
Other Sources: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2643008/pdf/
Terms of Use: This article is made available under the terms and conditions applicable to Open Access Policy Articles, as set forth at http://nrs.harvard.edu/urn-3:HUL.InstRepos:dash.current.terms-of-use#OAP
Citable link to this page: http://nrs.harvard.edu/urn-3:HUL.InstRepos:4506434

This item appears in the following Collection(s)

  • FAS Scholarly Articles [6948]
    Peer reviewed scholarly articles from the Faculty of Arts and Sciences of Harvard University
 
 
