One of our goals for package onboarding at rOpenSci is to provide a service to our authors by making their packages more visible once they pass through peer review. To that end, we are planning an experiment in which we automatically submit accepted packages to the Journal of Open Source Software (JOSS) if authors request it, with JOSS's editors able to accept them on the basis of rOpenSci's reviews.
JOSS has slightly different requirements from rOpenSci (RO). First, it requires a short (<500 words) paper.md document giving a high-level description of the software. Second, prior to submission, the software must be deposited in a repository that provides a DOI, such as figshare or Zenodo. JOSS's software quality requirements are largely general principles, as they are language-agnostic, and in general they are less stringent than RO's R package requirements. Thus, with a few small additions, an RO package can easily be published in JOSS.
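For reference, a minimal paper.md might look something like the sketch below. The field names follow my reading of JOSS's author guidelines, and the package name, author, and affiliation are placeholders; check the current JOSS documentation for the exact required fields before submitting.

```markdown
---
title: 'mypackage: An R package for doing something useful'
tags:
  - R
  - rOpenSci
authors:
  - name: Jane Doe
    orcid: 0000-0000-0000-0000
    affiliation: Some University
date: DD Month YYYY
bibliography: paper.bib
---

# Summary

One or two paragraphs (well under 500 words) describing the high-level
purpose of the package, who it is for, and how it relates to existing work.

# References
```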
However, for our reviews to be usable by JOSS, we would need to make some parts of them a little more structured and explicit. I've started a pull request that updates our onboarding repository to do so. Author submissions, as well as editor and reviewer comments, will include checklists in addition to free-form comments. These checklists cover everything in the JOSS reviewing checklist (e.g., https://github.com/openjournals/joss-reviews/issues/28), but with our more stringent, R-package-specific requirements. They give our reviews a little more structure and attempt to move some gatekeeping checks up to the editors, prior to assigning reviewers. This approach requires that our reviewers also review the short paper.md, should authors choose to submit to JOSS.
You can see what all of the checklists would look like together in a review in this test issue.
I hope for some community feedback before we start this process. I am very excited about this experiment - it will reduce the dual-submission effort for authors and lead to more visibility and credit for our packages. Yet I have two primary concerns:
1. We already ask a lot of our reviewers - are the additional checks and paper.md reviewing too onerous?
2. Will the checklist format of the reviewer's template lead to less thorough reviews?
Please take a look at the test issue and changes and give us your thoughts on the above or anything else!
It doesn’t sound like a lot of additional tasks for the reviewers… and maybe with the more structured template they’d save time compared to earlier reviews?
What exactly do you expect from a thorough review? Maybe you could be more specific in “Are there improvements that could be made to the code style? Is there code duplication in the package that should be reduced?” E.g., give a few keywords (in the reviews I got, these were for instance non-standard evaluation vs. standard evaluation)? But this will depend a lot on the reviewer’s experience, with or without a checklist, won’t it?
This seems like a good idea, for both rOpenSci and JOSS. JOSS doesn’t mint DOIs, correct? If so, then perhaps there could be an entire workflow that goes from onboarding through figshare/Zenodo/etc. to JOSS; could that all be automated in some way?
I agree with @maelle that it doesn’t sound like a lot of additional review effort, especially given that paper.md will probably resemble a README or vignette.
JOSS does mint DOIs, but those DOIs are different from the ones for the archive of the software itself. I guess this is because JOSS doesn’t have the infrastructure for long-term archival that Zenodo/figshare do. Perhaps @karthik or @arfon can elaborate.
We do aim to move towards more automation on this front, so that archival, release/tagging, JOSS submission, etc., all proceed automatically once a package is approved. I have a pipe dream that, once r-hub is up and running, we can do automatic CRAN submission as well.
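To make the archival step concrete, here is a rough sketch of what an automated Zenodo deposit could look like, using httr against Zenodo's REST deposition API. It assumes a personal access token in a ZENODO_TOKEN environment variable, and the package name, tarball path, and metadata below are placeholders; this is an illustration of the kind of step we could script, not a finished tool.

```r
# Rough sketch: archive an accepted package on Zenodo to mint a DOI before
# the JOSS submission, via Zenodo's REST deposition API.
# Assumes a personal access token is stored in the ZENODO_TOKEN env var;
# the package name, author, and tarball path are placeholders.
library(httr)

token   <- Sys.getenv("ZENODO_TOKEN")
base    <- "https://zenodo.org/api/deposit/depositions"
tarball <- "mypackage_0.1.0.tar.gz"   # hypothetical built package

# 1. Create a deposition with metadata describing the package release
meta <- list(metadata = list(
  title       = "mypackage: an example rOpenSci package",
  upload_type = "software",
  description = "R package accepted through rOpenSci onboarding review.",
  creators    = list(list(name = "Doe, Jane"))
))
dep <- content(POST(base, query = list(access_token = token),
                    body = meta, encode = "json"))

# 2. Attach the source tarball to the deposition
POST(paste0(base, "/", dep$id, "/files"),
     query = list(access_token = token),
     body  = list(name = tarball, file = upload_file(tarball)))

# 3. Publish the deposition, which mints the DOI
published <- content(POST(paste0(base, "/", dep$id, "/actions/publish"),
                          query = list(access_token = token)))
published$doi  # the DOI we would pass along to JOSS
```

The publish response should include the minted DOI, which could then feed directly into the JOSS submission.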
An important thing to consider: JOSS asks reviewers to declare that they have no conflict of interest. While open review reduces the implications of a COI, there still need to be some minimum criteria for what counts as one. It likely means that, for JOSS submissions, RO leadership, at least, should not be reviewing each other’s packages, or we’d need additional reviewers for such packages. What should our definition of COI be? We’re a small community, so of course reviewers are likely to have collaborated with authors at some point, but that’s true of most scientific fields.
Here’s one issue: is it a COI to review a package by someone who has reviewed/edited your package?
Not letting leadership review each other’s packages would probably mean that reviewing our internal pipeline of packages gets slowed down, but perhaps JOSS would be OK if one of the two reviewers were external for a JOSS submission.