
Science Reproducibility Taxonomy

posted on 2017-07-26, 20:52, authored by Lorena A. Barba
Presentation slides for the 2017 Workshop on Reproducibility Taxonomies for Computing and Computational Science
July 25, 2017

For a nicer viewer, see slides on Speaker Deck at:

Slide notes:

[slide 2] Jon F. Claerbout.
First appearance of the term “reproducible research” in a scholarly publication: 1992 invited paper, Society of Exploration Geophysicists.

—.
[slide 3]
Inspired by Claerbout, this is our adopted definition of Reproducible Research. It does rely on open-source code and open data, but that’s not all there is.

—.
[slide 8]
First article to publicly state that reproducibility depends on open code and data (AFAIK).
Defines reproducible computational research as that "in which all details of computations—code and data—are made conveniently available to others."
Took inspiration from Claerbout, who proposed that in computational science "the actual scholarship is the complete software development environment and the complete set of instructions which generated the figures."

—.
[slide 9]
The Yale Roundtable resulted in a jointly-authored Data and Code Sharing Declaration. About 30 experts got together ... their fields: computer science, applied mathematics, law, biostatistics, information sciences, astronomy, biochemistry.
The paper expanded on the theme of transparency via open code and data, defining reproducible computational research unambiguously as that which makes available all details (code and data) of the computations.

—.
[slide 10]
Subscribing to the recommendations of the Yale roundtable means we need to learn about software licensing and data management.
Among future goals, the Yale Roundtable recognized the importance of enabling citation of code and data, of developing tools to facilitate versioning, testing and tracking, and of standardizing various aspects like terminology, ownership, policy.

—.
[slide 11]
The IEEE Signal Processing Society adopted this statement:
“…our reproducible research efforts… we give readers access to all the information (code, data, schemes, etc.) that was used to produce the presented results…” –citing Claerbout

—.
[slide 12]
Peng (2011) introduced the idea of a reproducibility spectrum. He says that reproducible research is a “minimum standard for judging scientific claims when full independent replication of a study is not possible.”
Here we find an explicit distinction in terminology, where full replication of a study involves collecting new data, with a different method (and code), and arriving at the same or equivalent final findings.
The standard of reproducibility calls for the data and the computer code used to analyze the data to be made available to others.
... aim of the reproducibility standard is to fill the gap in the scientific evidence-generating process between full replication of a study and no replication
... a study may be more or less reproducible than another depending on what data and code are made available
... A critical barrier to reproducibility in many cases is that the computer code is no longer available.

—.
[slide 13]
We’ve adopted this definition of “Replication,” based on Peng (2011). A full replication study is sometimes impossible to do, but reproducible research is only limited by the time and effort we are willing to invest.

—.
[slide 14]
This paper summarizes a workshop held in Vancouver, July 2011. It mentions that at the workshop, “two sequential speakers provided opposite definitions for replicable and reproducible.”
“We believe the first refers to the ability to run a code and produce exactly the same results as published, and the second refers to the ability to create a code that independently verifies the published results using the information provided.”
The authors associate Reproducible Research with changing the culture of scientific publishing.
They acknowledge, however, that releasing code and data is not always viable (proprietary or privacy concerns, e.g.). But this should be no impediment to conducting reproducible research:
“… we call upon all computational scientists to practice reproducibility, even if only privately and for the benefit of your current and future research efforts: use version control, write a narrative, automate your process, track your provenance, and test your code.”
Then:
“… from private reproducibility it’s only a small effort to achieve public reproducibility if circumstances warrant: simply release the code and data under a suitable license.”
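To make these "private reproducibility" practices concrete, here is a minimal sketch in Python. The file paths and project layout are hypothetical (not from the talk); the idea is simply to record the code version, environment, and a hash of the input data next to each result, so a past run can be traced and re-created later.

```python
# Hypothetical sketch: record provenance alongside a result so a past run
# can be re-created later ("private reproducibility"). File names and paths
# are placeholders, not from the talk.
import hashlib
import json
import subprocess
import sys
from datetime import datetime, timezone
from pathlib import Path


def current_commit() -> str:
    """Return the current git commit hash (assumes the project is under version control)."""
    return subprocess.run(
        ["git", "rev-parse", "HEAD"], capture_output=True, text=True, check=True
    ).stdout.strip()


def sha256(path: Path) -> str:
    """Hash an input data file so later runs can confirm they used the same data."""
    return hashlib.sha256(path.read_bytes()).hexdigest()


def record_provenance(data_file: Path, out_dir: Path) -> None:
    """Write a small JSON record of code version, data hash, and environment."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "git_commit": current_commit(),
        "python": sys.version,
        "data_file": str(data_file),
        "data_sha256": sha256(data_file),
    }
    out_dir.mkdir(parents=True, exist_ok=True)
    (out_dir / "provenance.json").write_text(json.dumps(record, indent=2))


if __name__ == "__main__":
    # Placeholder paths; a real project would point at its actual data and results.
    record_provenance(Path("data/raw_measurements.csv"), Path("results/run-001"))
```

Releasing the repository and the data under suitable licenses would then be the only extra step to move from private to public reproducibility.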

—.
[slide 15]
The practice of “private reproducibility” may improve the quality of research and has the potential to increase the productivity of a team in the long term.

—.
[slide 16]
But … Science relies on presenting our findings for “independent testing and replication by others” (as declared by the APS in their Ethics and Values document of 1999).
“This requires the open exchange of data, procedures and materials.”
Genuine reproducible research is not only privately reproducible, but publicly so.

—.
[slide 17]
Quoting from this paper in PLOS Comp. Bio.
“…full replication studies on independently collected data is often not feasible … reproducible research [is] an attainable minimum standard"
So, what are those practices of reproducible research that help ensure quality of the process …

—.
[slide 18]
Common threads run through most of the recommendations:
1. Recognizing that a final result is the product of a sequence of intermediate steps (the analysis workflow), a key device for reproducibility is automation (a minimal sketch follows below).
2. The central technology for dealing with software as a living, changing thing is version control.
3. Archive and document everything with the best tools at hand.
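As an illustration of the automation thread, here is a minimal sketch of a scripted analysis workflow. The step names, file paths, and column names are made up for the example; the point is that every step from raw data to final figure is a function, and one entry point regenerates the whole result.

```python
# Minimal sketch of an automated analysis workflow: every step from raw data
# to final figure is scripted, so the result can be regenerated with one command.
# The file paths and column names below are illustrative, not from the slides.
from pathlib import Path

import matplotlib
matplotlib.use("Agg")  # render figures without a display
import matplotlib.pyplot as plt
import pandas as pd

RAW = Path("data/raw_measurements.csv")
CLEAN = Path("data/clean.csv")
FIGURE = Path("figures/summary.png")


def clean(raw: Path, out: Path) -> None:
    """Step 1: drop incomplete rows from the raw data."""
    df = pd.read_csv(raw).dropna()
    out.parent.mkdir(parents=True, exist_ok=True)
    df.to_csv(out, index=False)


def plot(clean_file: Path, figure: Path) -> None:
    """Step 2: regenerate the summary figure from the cleaned data."""
    df = pd.read_csv(clean_file)
    figure.parent.mkdir(parents=True, exist_ok=True)
    ax = df.plot(x="time", y="signal")
    ax.figure.savefig(figure, dpi=150)
    plt.close(ax.figure)


if __name__ == "__main__":
    # One entry point re-runs the whole workflow; intermediate files become archivable artifacts.
    clean(RAW, CLEAN)
    plot(CLEAN, FIGURE)
```

Kept under version control together with the archived inputs and outputs, a driver script like this documents exactly how each figure was produced.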

—.
[slide 19]
…cites Claerbout, Vandewalle et al. (signal processing) and Spies et al. (psychology)
…does not make a distinction of terms, but deals with “reproducing” an existing work with artifacts provided by original authors

—.
[slide 20]
uses “replication studies” several times, “inability to replicate findings,” “replication in studies with different data”
—although no explicit terminology definition is included, the usage seems compatible with Peng’s.

—.
[slide 22]
The report also states that “Reproducibility is a minimum necessary condition for a finding to be believable and informative.”

—.
[slide 23]
TOP Guidelines—signed so far by nearly 3,000 journals and organizations—plainly link reproducibility to sharing of data, open code, research design disclosure, pre-registration of analysis plans and study details, and replication studies.

—.
[slide 25]
Computing in Science and Engineering, jointly published by the IEEE and AIP, has included several influential works on reproducible research, starting with the 2000 paper by Claerbout et al.
This year, it announced a new dedicated magazine track on Reproducible Research.

—.
[slide 26]
The IEEE Computer Society Technical Consortium on High-Performance Computing is also launching a new initiative on Reproducibility (led by L. Barba).
Its activities will include canvassing IEEE journal editors on the reproducibility concerns and efforts in their communities.

—.
[slide 27]
Finally, I want to tell you about The Journal of Open Source Software (JOSS), a new journal for research software.
JOSS is an academic journal with a formal peer review process that is designed to improve the quality of the software submitted.
Writing papers about software is currently the only sure way for authors to gain career credit as it creates a citable entity (a paper) that can be referenced by other authors. The primary purpose of a JOSS paper is to enable citation credit to be given to authors of research software.
JOSS has a rigorous peer review process and a first-class editorial board experienced at building and reviewing high-quality research software.
JOSS is an affiliate of the Open Source Initiative, and it is a fiscally sponsored project of NumFOCUS, a 501(c)(3) nonprofit in the US that is also home to NumPy, SciPy, Jupyter, Julia, FEniCS, and several other open-source projects for science.

I am both a member of the Editorial Board for JOSS, and a member of the Board of Directors of NumFOCUS.
