Multi-stage code review for manageable size

@jhollist just submitted a great review of osmplotr that included this line:

Where possible I have made comments about the code itself, but given the size of the code base in osmplotr I was unfortunately not able to spend too much time digging into the specifics.

I feel this could be a frequent challenge: reviewing a large package may be too much to expect of reviewers (Jeff mentions that his review took 10 hours).

I previously suggested, based on Mozilla’s experience, that we try to limit the size of reviews. I said 400 lines of code before. This might be too little. But perhaps we could try to encourage authors to submit packages for “first” review at some early stage when they have some core functionality built, even if they don’t think the package is feature-complete. This would have the additional benefit of getting authors on board with various best practices, tests, CI, etc. that have benefit during package development.

At the very least we could warn that the longer the package, the longer it will take to review! I suppose I could take a look at the previous review times and see what the relationship is.


@noamross, I had the very same thought.

This was my first code review and I was a bit daunted (this is NOT AT ALL a dig on @mpadge’s package). Had I been able to see earlier versions with just the base functionality, a line-by-line review would have been more manageable. Not sure I have any great ideas about how to facilitate a multi-stage review, but being able to comment on the package as it is being developed, at maybe a “proposal”, “core functionality”, and “first (or second, third, …) release” stage, would have been nice.

Agreed, we should put this in the instructions for authors.

We should also add a note that the more code there is, the longer it could take to review; and maybe “could” changes to “will most likely”, based on your analysis of the previous review times.

I really like this idea. It might be good to have a short checklist (and maybe an example) of a stage-1 package for review. That could be super helpful, but I am worried that we might wind up just increasing the overall burden on reviewers:

One concern I have is that for some packages at least, this may be kind of tricky from the developer’s perspective – but maybe it’s just me. Even when I begin developing a package thinking I have a pretty clear idea of what the package should look like, I often change this as I go, and the first ‘minimal functionality’ I write rarely survives into the more feature-rich stage. I’m all for the benefits of early feedback, but I do worry this would just raise the burden on reviewers further without resulting in any less total code to review at the later stage.

Meanwhile, I also find that many packages that have lots of end-user functions and/or loads of code can nonetheless follow the same general pattern, and reviewing only a few functions closely is usually sufficient. (And when that’s not the case, I might be tempted to argue that the user API could be slimmed down a bit to make it more manageable? I’m certainly guilty of that myself.) Maybe we could ask reviewers to focus on a subset of the functionality for such large packages?

Code review is really helpful, but it is most helpful at the design stage. In particular, I think an early review of the designed public-facing API would be useful even before implementation. We spend a lot of time thinking about our software APIs, iterate on them many times, and still find we need to change them in the end. So, getting broader input at the design stage would be super helpful.

Reviewing code for correctness can be helpful but is a huge time burden. In terms of focused effort, I think it would be best to spend time strategically reviewing tests rather than implementation code. A good set of tests will make it obvious whether the code is likely to be correct. We look for test suites that test not just whether a function works as designed, but also whether it handles bad data properly, gives appropriate error messages, and works under various conditions (logged in, not logged in, etc.). So, carefully reviewing the test suite for even one function can give really good insight into the overall quality of the code (assuming all the tests are similarly thorough, or not thorough), without looking at even a single line of implementation code. And because the test suite is generally written against the public API, it also shows a good working model of the code in action.
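As a hypothetical sketch of the kind of test suite described above, consider a block like this (the function `parse_record()` and its error messages are invented purely for illustration, not taken from any submitted package). Reviewing just one such file tells you a lot about how thoroughly the package is tested:

```r
library(testthat)

# Invented example function: parses "key=value;key=value" strings.
parse_record <- function(x) {
  if (!is.character(x) || length(x) != 1) stop("input must be a single string")
  if (nchar(x) == 0) stop("empty record")
  fields <- strsplit(x, ";")[[1]]
  setNames(as.list(sub(".*=", "", fields)), sub("=.*", "", fields))
}

# Does it work as designed?
test_that("parse_record parses well-formed records", {
  expect_equal(parse_record("id=1;name=ok")$name, "ok")
})

# Does it handle bad data, with appropriate error messages?
test_that("parse_record gives clear errors on bad input", {
  expect_error(parse_record(NULL), "single string")
  expect_error(parse_record(""), "empty record")
})
```

Even without reading `parse_record()`'s implementation, the presence (or absence) of the bad-input tests is a strong signal of overall code quality.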


This is definitely the key issue.

We have a substantial number of packages that come in with little or no testing. Maybe early intervention on this front will make later reviewing easier.

If we implement @aammd’s idea of having packages submitted as pull requests, we could have them as PRs against a repository that could run some tests. (Or maybe the ro package could have a ro_check() function for this.) These wouldn’t necessarily be required to pass for submission, but they could reduce the burden on the first-pass reviewer.
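As a rough sketch of what such automated first-pass checks might run (the specific tools below are my assumptions, not an agreed spec, and ro_check() itself is still hypothetical):

```shell
#!/bin/sh
# Hypothetical first-pass checks triggered by a submission PR.
# Tool choices here are illustrative assumptions only.

# Build the package and run standard checks (errors, warnings, notes)
R CMD build .
R CMD check --as-cran ./*.tar.gz

# Report test coverage, so reviewers can see at a glance what is untested
Rscript -e 'covr::package_coverage()'
```

Something like this wouldn’t replace human review, but it would surface missing tests and failing checks before a reviewer spends any time on the package.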

Agree @mbjones, it’d be nice to get software submitted early on, and we’ll encourage that. However, there are some folks who have existing software in some non-nascent state and then decide to submit to our suite; these are of course past the design stage.

On tests, I like the idea of reviewing test suites, though as Noam said, some submissions don’t have tests, making that not always doable.