Growth in a Time of Debt

Molecular Structure of Nucleic Acids: A Structure for Deoxyribose Nucleic Acid

We describe a study into the extent to which Computer Systems researchers share their code and data, and the extent to which such code builds. Starting with 601 papers from ACM conferences and journals, we examine 402 papers whose results were backed by code. For 32.3% of these papers we were able to obtain the code and build it within 30 minutes; for 48.3% of the papers we managed to build the code, but it may have required extra effort; for 54.0% of the papers either we managed to build the code or the authors stated the code would build with reasonable effort.


How the Case for Austerity Has Crumbled
In April 2013, Herndon, Ash & Pollin showed that the statistical analyses performed on the data in the original Reinhart-Rogoff Excel spreadsheet (which were used to support the conclusions of the paper) were flawed.
Economist Paul Krugman (winner of the Swedish National Bank's Prize in Economic Sciences) later explained: "What the Reinhart-Rogoff affair shows is the extent to which austerity has been sold on false pretenses. For three years, the turn to austerity has been presented not as a choice but as a necessity."
Article accessed online on March 28, 2017: no corrections, no warning, the wrong results are still online. Cited by more than 2,000 papers.

Software Problem Leads to Five Retractions
In September 2006, Swiss researchers published a paper in Nature that cast serious doubt on a protein structure that Chang's group had described in a 2001 Science paper.
When he investigated, Chang was horrified to discover that a homemade data-analysis program had flipped two columns of data, inverting the electron-density map from which his team had derived the final protein structure.
"I've been devastated", Chang says. "I hope people will understand that it was a mistake, and I'm very sorry for it."

I have heard from graduate students opting out of academia, assistant professors afraid to come up for tenure, mid-career people wondering how to protect their labs, and senior faculty retiring early, all because of methodological terrorism.

APS Observer (2016)
A second concern held by some is that a new class of research person will emerge: people who had nothing to do with the design and execution of the study but use another group's data for their own ends, possibly stealing from the research productivity planned by the data gatherers, or even use the data to try to disprove what the original investigators had posited. There is concern among some front-line researchers that the system will be taken over by what some researchers have characterized as "research parasites".
The New England Journal of Medicine (2016)

Reproducible computational neuroscience
Any model in science is doomed to be proved wrong or incomplete and to be replaced by a more accurate one. In the meantime, for such a replacement to happen, we first have to make sure that models are actually reproducible, such that they can be tested, evaluated, criticized and ultimately modified, replaced or even rejected. This is where the shoe pinches.
If we cannot reproduce a model in the first place, we are doomed to re-invent the wheel again and again, preventing us from building incremental computational knowledge.
My field of research is quite different from computational neuroscience, but I recognize the problem described in this paper very well. The core issue has in my opinion been identified in the comment by Jan Moren: there is no obvious way to publish complex scientific models other than as part of simulation software.

Rerunnable
Can you re-run your program? One day, one week, one month, one year (just kidding) apart?

Repeatable
Can you re-run your program and get the same results? Did you save everything, including the random seed?
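Saving the seed is the cheapest repeatability insurance there is. A minimal sketch (the `run_simulation` function and its seed value are hypothetical, standing in for any stochastic computation):

```python
import random

def run_simulation(seed):
    # The seed is recorded and passed explicitly, never left implicit:
    # this is what makes a re-run repeatable.
    rng = random.Random(seed)
    return sum(rng.random() for _ in range(5))

# Two runs with the same saved seed give bit-identical results.
assert run_simulation(42) == run_simulation(42)
```

Had the seed been drawn silently from the system clock, each run would produce different numbers and yesterday's figure could never be regenerated exactly.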

Reproducible
Can someone else re-run your program and get the same results? Did you save the software stack?
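Recording the software stack can be as simple as emitting a manifest next to the results. A hypothetical sketch using only the standard library (the manifest keys are illustrative, not a standard; in practice one would also pin library versions, e.g. with `pip freeze`):

```python
import json
import platform

def environment_manifest():
    # Capture the interpreter and OS the results were produced with,
    # so that someone else can rebuild a matching environment.
    return {
        "python": platform.python_version(),
        "implementation": platform.python_implementation(),
        "os": platform.platform(),
    }

# Save this JSON alongside the program's output.
print(json.dumps(environment_manifest(), indent=2))
```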

Replicable
Can someone else reimplement your model and get the same results? Did you describe everything?

Reusable
Can someone reuse your program with different data? Is your software data-dependent?

Replications in the wild
What is a replication?
Bob reads Alice's paper, takes note of all model properties and then implements the model himself using a method of his choice.
Bob obtains qualitatively the same results, confirming Alice's findings.
Alice's model has been replicated.

Who wants to write a replication?
During the course of a PhD, it is often the case that a student will try to replicate results from the literature, possibly interacting with the original authors.
Such a replication generally lives on the hard drive of the student's computer, while it would actually be useful to the whole scientific community.

Who wants to review & publish such replication?
We do! Introducing:

The ReScience journal
ReScience is an open, peer-reviewed journal that targets any computational research and encourages the explicit replication of already published research, promoting new and open-source implementations.
ReScience lives on GitHub, where each new implementation is made available together with its explanation (the article).
Each published article is archived on Zenodo.
Zenodo is a research data repository created by OpenAIRE and CERN to provide a place for researchers to deposit datasets.
GitHub is a web-based Git repository hosting service that offers all of the distributed version control and source code management functionality of Git, as well as adding its own features.
We redo Science!

ReScience
Reproducible science is good. Replicated science is better. Some research may not be replicable. Before declaring a research result non-replicable, we require extra caution to be taken. In addition to scrutiny of your submission by reviewers and editors, we will contact the authors of the original research, and issue a challenge to the ReScience community to spot and report (using the issue tracker) errors in your implementation.
If no errors are found, your work will be accepted and the original research will be declared non-replicable.
What about replication of my own work?
No. Mistakes in the implementation of research questions and methods are often due to biases authors invariably have, consciously or not. One's biases will inevitably carry over to how one approaches a replication.
Perhaps even more importantly, we aim at the cross-fertilization of research: trying to replicate the work of one's peers might pave the way for a future collaboration, or may give rise to new ideas as a result of the replication effort.
What kind of research can I replicate?
Any computational research, in any domain of science, as long as there is an editor on the Board who has the expertise to edit your submission. The editorial board is growing to increase the scientific domains being covered. If no editor is able to edit your submission, you can also propose a guest editor (who must be willing to work with our GitHub-based editorial process).
I'm a student, can I submit?
Yes! Students are strongly encouraged to submit their work. Although the ReScience publishing model is a bit different from other academic journals, it can give students a first experience at peer-reviewed scholarly publishing, including meeting standards of scientific rigor and addressing reviewers' comments. Publishing in ReScience is also a way to actively contribute to open science while adding to one's publication record.