Thoughts on the rOpenSci process for contributors and reviewers/maintainers?

Just read this little blog post on how the Docker project does code review and builds a community of both volunteer and employee maintainers.

Obviously there are plenty of differences between that project and ours, but lots of parallels too; thought folks here might enjoy reading it. I wonder if there are any lessons for us there (a staged review process? the balance of contributor vs. reviewer time?).

Also see part 3 on some clever automation that helps this approach scale.

Interesting stuff. Based on feedback, we can certainly be a bit clearer about the stages for both submitters and maintainers (maybe that involves cleaning up our tagging a bit).

I wonder if we want to emulate the “community maintainer” model, which for us might be “community editors.” If volume goes up, we’ll eventually need more editors. But to follow this model (I’m unsure we should), we’d need a way for people to pitch in without being asked and become active contributors, so they can be promoted to editor. Should we encourage people to follow onboarding and chime in on any package under review?


Thanks for sharing, @cboettig!

Would people be up for having our tags ordered with numbers, as in the Docker example? For example:

  • 1/editor-assigned
  • 2/seeking-reviewers
  • 3/reviewer-requested
  • 4/review-in-awaiting-changes
  • 5/approved

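If we went this route, the label set could be created up front rather than by hand. A minimal sketch of that, assuming the GitHub CLI’s `gh label create` command; the color value and repo name here are just placeholders:

```python
# Sketch: generate one `gh label create` command per numbered review stage.
# The color and repo name are illustrative assumptions, not our real settings.
STAGES = [
    "1/editor-assigned",
    "2/seeking-reviewers",
    "3/reviewer-requested",
    "4/review-in-awaiting-changes",
    "5/approved",
]

def label_commands(repo, color="006b75"):
    """Return a `gh label create` invocation for each stage label."""
    return [
        f'gh label create "{name}" --color {color} --repo {repo}'
        for name in STAGES
    ]

for cmd in label_commands("ropensci/onboarding"):
    print(cmd)
```

Running the printed commands once would set up the whole ordered set, and the numeric prefixes keep them sorted in GitHub’s label picker.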
I’m not sure we’d need to ask people to chip in without being asked, and I’d be surprised if that worked (though I could be wrong!). Most of our community are academics, and I imagine they don’t get credit for software review, whereas devs contributing to Docker have an incentive because they are software engineers and software contributions are valued in that community. We have a pool of reviewers that grows over time, and over time it will become clear who is putting in the hours. As needed, we could ask those who have reviewed a lot whether they’re interested in being an editor, or people can nominate themselves or someone else.

We could definitely do more on the automation front to help the process along, e.g., having the bot look over submissions and flag anything the submitter missed. The blog post talks about automatically changing/adding labels; that could be of interest as well.
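For the label-changing piece, the bot’s core logic could be as small as computing the next stage from the current label. A rough sketch, using the stage names proposed above (a real bot would read and write the labels via the GitHub Issues API; that part is omitted here):

```python
# Sketch of the stage-advancing logic a review bot might use.
# Only the label transition is computed; applying it to an issue
# would go through the GitHub Issues API.
STAGES = [
    "1/editor-assigned",
    "2/seeking-reviewers",
    "3/reviewer-requested",
    "4/review-in-awaiting-changes",
    "5/approved",
]

def next_stage(current):
    """Return the label for the next stage, or None if already at the final stage."""
    i = STAGES.index(current)
    return STAGES[i + 1] if i + 1 < len(STAGES) else None
```

Because the whole workflow lives in the label names, the bot needs no state of its own: the issue’s current label is the state.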

I like @sckott’s numbering scheme. Slight tweaks:

  • 1/editor-checks (an editor has been assigned and is doing initial checks before deciding to send it out for review)
  • 3/reviewers-assigned (switched when we find a second reviewer, or give up on finding one)

We should also make a habit of making the handling editor the “Assignee.” That way we can filter by editor, and you can see who the editor is right in the issue list.

All sounds good @noamross

Any objections, @cboettig?

OK, numbered tags as above are now on all open submissions, with the additional new tag of 0/presubmission for pre-submission inquiries. I’ve also made sure to put the handling editor as assignee.

Thanks @noamross — looks good to me.