Tracking Replicability As a Method of Post-Publication Open Evaluation


dc.contributor.author Hartshorne, Joshua Keiles
dc.contributor.author Schachner, Adena Michelle
dc.date.accessioned 2012-03-13T14:36:04Z
dc.date.issued 2012
dc.identifier.citation Hartshorne, Joshua K. and Adena Schachner. 2012. Tracking replicability as a method of post-publication open evaluation. Frontiers in Computational Neuroscience 6:8. en_US
dc.identifier.issn 1662-5188 en_US
dc.identifier.uri http://nrs.harvard.edu/urn-3:HUL.InstRepos:8355188
dc.description.abstract Recent reports have suggested that many published results are unreliable. To increase the reliability and accuracy of published papers, multiple changes have been proposed, such as changes in statistical methods. We support such reforms. However, we believe that the incentive structure of scientific publishing must change for such reforms to be successful. Under the current system, the quality of individual scientists is judged on the basis of their number of publications and citations, with journals similarly judged via numbers of citations. Neither of these measures takes into account the replicability of the published findings, as false or controversial results are often particularly widely cited. We propose tracking replications as a means of post-publication evaluation, both to help researchers identify reliable findings and to incentivize the publication of reliable results. Tracking replications requires a database linking published studies that replicate one another. As any such database is limited by the number of replication attempts published, we propose establishing an open-access journal dedicated to publishing replication attempts. Data quality of both the database and the affiliated journal would be ensured through a combination of crowdsourcing and peer review. As reports in the database are aggregated, ultimately it will be possible to calculate replicability scores, which may be used alongside citation counts to evaluate the quality of work published in individual journals. In this paper, we lay out a detailed description of how this system could be implemented, including mechanisms for compiling the information, ensuring data quality, and incentivizing the research community to participate. en_US
dc.description.sponsorship Psychology en_US
dc.language.iso en_US en_US
dc.publisher Frontiers Research Foundation en_US
dc.relation.isversionof doi:10.3389/fncom.2012.00008 en_US
dash.license LAA
dc.subject replication en_US
dc.subject replicability en_US
dc.subject post-publication evaluation en_US
dc.subject open evaluation en_US
dc.title Tracking Replicability As a Method of Post-Publication Open Evaluation en_US
dc.type Journal Article en_US
dc.description.version Version of Record en_US
dc.relation.journal Frontiers in Computational Neuroscience en_US
dash.depositing.author Hartshorne, Joshua Keiles
dc.date.available 2012-03-13T14:36:04Z
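
The abstract above proposes aggregating replication reports into per-study replicability scores. Purely as an illustration of that idea, and not as the authors' specification, the following Python sketch treats a score as the fraction of recorded replication attempts that succeeded; the ReplicationReport structure, field names, and DOIs are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class ReplicationReport:
        """One entry in a hypothetical replication database."""
        original_doi: str      # study whose finding is being replicated
        replicating_doi: str   # published replication attempt
        succeeded: bool        # whether the attempt reproduced the finding

    def replicability_score(reports, doi):
        """Fraction of replication attempts of `doi` that succeeded.

        Returns None when no attempts are recorded, so that an untested
        study is not confused with one that repeatedly failed to replicate.
        """
        attempts = [r for r in reports if r.original_doi == doi]
        if not attempts:
            return None
        return sum(r.succeeded for r in attempts) / len(attempts)

    # Made-up example data: two successes out of three attempts -> about 0.67.
    reports = [
        ReplicationReport("10.1000/original", "10.1000/rep1", True),
        ReplicationReport("10.1000/original", "10.1000/rep2", False),
        ReplicationReport("10.1000/original", "10.1000/rep3", True),
    ]
    print(replicability_score(reports, "10.1000/original"))

A real implementation would also need to weight direct versus conceptual replications and handle disputed reports, questions the paper discusses rather than this toy calculation.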

Files in this item

Files                               Size     Format
hartshorne_frontier_comp_neuro.pdf  1.246Mb  PDF

This item appears in the following Collection(s)

  • FAS Scholarly Articles [7374]
    Peer reviewed scholarly articles from the Faculty of Arts and Sciences of Harvard University
