At #runconf16, we discussed building an automated checker for packages submitted to rOpenSci that would check some things beyond R CMD check and provide a report to editors/reviewers, reducing what they have to check manually. Reviewers: what kinds of things would you find helpful?
Here are some things we came up with (a rough sketch of how a couple of these could be gathered automatically follows the list):
A lint() report checking things like consistency in function names
Is there a vignette?
What is the test coverage?
The NOTES/WARNINGS/ERRORS from R CMD check
What CI service is being used? (A Linux AND Windows CI?)
README.Rmd is used if there is substantial code in README
Lines of code in .R files, Roxygen comments, and tests
List of packages in Depends
List of scaffolding packages used that are not the recommended ones (XML vs. xml2, etc.)
Is there a code of conduct?
print() or cat() calls in functions
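To make this concrete, here is a rough sketch of how a couple of the items above could be gathered with existing tools (the path and the shape of the report are placeholders, not a settled design):

```r
# Rough sketch only: lintr and covr do the heavy lifting; everything else is illustrative.
library(lintr)
library(covr)

pkg <- "path/to/pkg"   # placeholder path to the submitted package

lints        <- lint_package(pkg)                         # style/consistency issues
coverage_pct <- percent_coverage(package_coverage(pkg))   # test coverage (%)
has_vignette <- length(list.files(file.path(pkg, "vignettes"),
                                  pattern = "\\.(Rmd|Rnw)$")) > 0

list(n_lints = length(lints),
     test_coverage = coverage_pct,
     has_vignette = has_vignette)
```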
This pre-check has two purposes: to identify things that we would want an author to fix before review (the editor check), and to identify things that a reviewer may want to focus on during the review. Both ultimately aim to reduce the burden on reviewers and to give authors faster feedback.
Note that not all of these are requirements. For instance, we have recommended scaffolding packages, but authors can use others if they have good reason. Still, we think it would be useful for reviewers to be alerted to these cases.
What else might we consider putting in such a report so that it is useful (and also concise)?
On CI, a Windows CI is really only important if code is platform dependent in some way (compiled code, text encoding, a handful of platform-specific functions, and very few other edge cases), so that probably shouldn’t be required. Or, maybe it’s possible to identify some heuristics for when it is really needed.
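A very rough sketch of what such a heuristic might look like (the list of platform-sensitive calls is a placeholder, not a vetted set):

```r
# Crude heuristic, for illustration only: does a package plausibly need Windows CI?
needs_windows_ci <- function(pkg_path = ".") {
  has_compiled <- dir.exists(file.path(pkg_path, "src"))   # compiled code present?
  r_files <- list.files(file.path(pkg_path, "R"), pattern = "\\.[Rr]$",
                        full.names = TRUE)
  code <- unlist(lapply(r_files, readLines, warn = FALSE))
  # A few calls that tend to behave differently across platforms (illustrative list).
  platform_calls <- c("Sys.setlocale", "shell(", "file.symlink(", "iconv(")
  uses_platform_calls <- any(vapply(platform_calls,
                                    function(p) any(grepl(p, code, fixed = TRUE)),
                                    logical(1)))
  has_compiled || uses_platform_calls
}
```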
I’d say look at what BiocCheck is doing, and possibly just apply it as an extra check if you agree with most/all of it.
I know it checks function length, things like value fields in function documentation, whether all the vignette chunks or examples are wrapped in dontrun/eval=FALSE, and a lot of other stuff.
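For what it's worth, a crude version of the dontrun check is easy to sketch (this is illustrative, not BiocCheck's actual implementation):

```r
# Illustrative only: flag Rd files whose examples appear to be wholly wrapped in \dontrun{}.
dontrun_only_examples <- function(pkg_path = ".") {
  rd_files <- list.files(file.path(pkg_path, "man"), pattern = "\\.Rd$",
                         full.names = TRUE)
  flagged <- vapply(rd_files, function(f) {
    txt <- gsub("[[:space:]]", "",
                paste(readLines(f, warn = FALSE), collapse = ""))
    grepl("\\examples{", txt, fixed = TRUE) &&
      grepl("\\examples{\\dontrun{", txt, fixed = TRUE)
  }, logical(1))
  basename(rd_files[flagged])
}
```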
I’d also say that, like BiocCheck, you should make it so that users can do
R CMD ROSCheck pkg.tar.gz and get a nice check-like summary out.
Just continuing to park some ideas here. Now that we're using goodpractice::gp() on most submitted packages, I should note that it allows one to set custom checks, so we may be able to add checks such as those for recommended packages (XML vs. xml2, RCurl vs. httr, etc.).
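As a standalone illustration of the kind of check I have in mind (the function name and the package-replacement mapping are made up, and this isn't wired into goodpractice's custom-check mechanism):

```r
# Hypothetical helper, not part of goodpractice: flag dependencies for which
# we recommend a different package. The mapping below is illustrative only.
check_recommended_deps <- function(pkg_path = ".") {
  replacements <- c(XML = "xml2", RCurl = "httr/curl", rjson = "jsonlite")
  desc <- read.dcf(file.path(pkg_path, "DESCRIPTION"))
  fields <- intersect(c("Depends", "Imports", "Suggests"), colnames(desc))
  deps <- unlist(strsplit(desc[1, fields], ","))
  deps <- gsub("[[:space:]]+|\\(.*\\)", "", deps)   # drop whitespace and version specs
  flagged <- intersect(deps, names(replacements))
  if (length(flagged) == 0) return(character(0))
  sprintf("Consider %s instead of %s", replacements[flagged], flagged)
}
```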
Would this be easier if people submitted their packages with a form, and the bot initiated the review issue? The form would ensure that fields like the repo URL are formed properly. This is what JOSS does.
Slightly off subject – should reviewers / the automatic review check that WARNINGs are treated as errors on the CI tool used by the package author(s)? It's the default option for Travis, so I'm not sure it will often be an issue.