

What does peer review mean when applied to computer code?

A new experiment by the Mozilla Science Lab seeks to explore the interface between software engineers’ code review and the peer review of scientific articles that include code.

At a time when computational methods are becoming ever more widespread and necessary in research, increasing numbers of articles include small or large pieces of code used to manipulate data. Some of the people producing this code are trained computational scientists, while others are biologists who have picked up coding ‘along the way’. At present, some journals (including PLOS Computational Biology) publish ‘Software’ articles, whose main aim is to make available a useful piece of software; in these articles the software is carefully peer-reviewed. But in the majority of articles, which include some smaller piece of code that is not the focus of the research, the peer review of the code may be cursory at best. We don’t know in any formal way whether this causes problems down the line, when others try to replicate or build on published work. Anecdotally, there are sometimes difficulties in building on published code (even when it is fully available). So, should there be more formal review of code? And if so, how should it be approached?

In an experiment beginning this month, the Mozilla Science Lab will conduct a trial series of software reviews. Over the month of August, a set of volunteer Mozilla engineers will review snippets of code from previously published PLOS Computational Biology papers, treating them as they would any other code in their professional life. Once the initial reviews are complete, the Science Lab will approach the papers’ authors to offer them the opportunity to participate in a dialogue about the process. The reviews will hopefully show:

Image credit: Marissa Anderson (flickr)

  • How much scientific software can be reviewed by non-specialists, and how often is domain expertise required?
  • How much effort does this take compared to reviews of other kinds of software, and to reviews of papers themselves?
  • How useful do scientists find these reviews?

The Science Lab will publish the results of this experiment in an anonymized summary form; the reviews will not affect the status of the publications. We encourage authors to make use of the reviews they receive, perhaps via our post-publication commenting feature, but for this experiment authors are under no obligation to follow up with the journal after receiving the review of their article from Mozilla. You can find out more in a post by Kaitlin Thaney, Director of the Mozilla Science Lab, here.

We look forward to learning from the results so we can improve the review process for scientific code.

  1. I direct an NSF research coordination network, the Network for Computational Modeling in Social and Ecological Sciences (CoMSES Net). One of our initiatives is to develop a model code library and to establish procedures and best practices for peer evaluation of this code. You might want to take a look at this (website above) and let me know if you’d like to coordinate on this.

    Michael Barton
