September 2018

Ranae

I had questions on this earlier this week over here at RStudio Community!

I’m getting a little code cleaned up and am wondering what the next steps are before submitting it. I’d love to see some presenters with experience getting their code reviewed - especially if they didn’t have many nearby colleagues to rely on.

September 2018

noamross

Question: Should this conversation cover reviewing code as a reviewer of the manuscript? Or only getting collaborators to review your code prior to submission?

2 replies
September 2018

ArtPoon

I happen to be reviewing a student’s code right now, prior to submitting the associated manuscript for peer review.

1 reply
September 2018

seaaan

I have published code to recreate all analyses and figures with two manuscripts. I don’t have a systematic way of getting my code reviewed by colleagues (I have just asked informally), so I would be very interested to listen in on this call and hear what people do.

One point I’d like to add is that I prefer to deposit the code and data with the publisher and paper (rather than with a third-party like github or figshare). My reasoning is that there is then a canonical, unchangeable version that will persist as long as the paper is available (who knows how long github will be around?). It is also directly linked to the paper. Unfortunately I think some publishers may not support .zip files as supplementary information.

September 2018 ▶ noamross

stefanie

Good Q @noamross. I feel like this one should focus on getting collaborators to review your code prior to submission, or on reviewing code that is used internally (not necessarily for a paper).

1 reply
September 2018 ▶ ArtPoon

stefanie

@ArtPoon (great to see you here)
Your response gets at what I think many people will appreciate hearing. Many faculty members are trying to figure out how to manage this. Thank you!

No date set yet for the call but I hope you’re able to attend.

September 2018

hye

What is the culture of the lab / group around feedback and code collaboration?

What are the use cases?

What are some practices that PIs can implement / adopt?

September 2018 ▶ noamross

karthik

My motivation for suggesting this call was to further a), but I don’t see how these are too different. Perhaps we can empower more people to do in-lab code review prior to submission, and then have those practices make their way up to manuscript review.

September 2018

jsta

One challenge I see is how to provide a roadmap of a code base to a reviewer that links specific outputs to the code that created them. Makefiles are great for this but I’m not sure they are for everyone. Are there less intimidating (lighter-weight) alternatives or guidelines for doing this maybe in a README context?

2 replies
September 2018 ▶ jsta

noamross

I like to do a file tree in my repos: https://github.com/ecohealthalliance/HP3#listing-of-files

1 reply
September 2018 ▶ stefanie

mark_scheuerell

The irony of working for the govt is that everything we do/produce should be publicly available, but there are heavy restrictions on the tools/software at our disposal. For example, my agency received permission to use GitHub <2 years ago. The result is that not many are yet up to speed on its capabilities, particularly with respect to collaborative coding projects (eg, linking issues and projects in GH). I think many people would benefit from a general overview of the many possible avenues and their pros/cons.

That said, I typically work with 1+ others to review my code, which involves simulated data and verification. I also started writing supplemental markdown docs for nearly everything I do, which usually contain enough of the background/math/stats to understand what I’m trying to accomplish, followed by the code to read, munge, analyze, plot, etc. In addition to the benefits for internal sharing/reviewing, external reviewers/editors of papers also find it enormously helpful when they can replicate your entire process, figures, etc.
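To make the simulate-and-verify step concrete, here is a toy sketch (arbitrary numbers, not one of my actual analyses): simulate data with known parameters, fit the model, and check that the estimates land near the truth.

    # Toy simulate-and-verify check (illustrative only): true values are known,
    # so the fitted estimates can be compared against them.
    set.seed(42)
    x <- rnorm(200)
    y <- 2 + 3 * x + rnorm(200, sd = 0.5)   # true intercept = 2, slope = 3

    fit <- lm(y ~ x)
    coef(fit)                               # should be close to c(2, 3)
    stopifnot(abs(coef(fit) - c(2, 3)) < 0.2)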

1 reply
September 2018 ▶ mark_scheuerell

stefanie

@mark_scheuerell Helpful perspective, thank you! I hope you can join the call (date tbd).

re: gov’t, not sure if you met @boshek (Sam Albers) at the unconf dinner. He, Stephanie Hazlitt (https://twitter.com/stephhazlitt) and @andy_teucher work in a British Columbia government group that has forged a path of working in GitHub, some open code, and submitting a package for rOpenSci review.

1 reply
September 2018 ▶ stefanie

cboettig

So far my approach has been to focus more on structure / organization than on function.

I expect code and other material associated with a given research project of a trainee to be on a (public or private) GitHub repository. I encourage trainees to follow the R package / ‘compendium’ model of organization for their repo (e.g. https://github.com/cboettig/compendium), with analyses in .Rmd notebooks, functions in an R/ directory, a README.md and a DESCRIPTION file.
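For anyone unfamiliar with that layout, a bare-bones skeleton might look like the following (the project name is just an example; usethis or rrtools can scaffold something richer):

    # Bare-bones compendium skeleton (project name is illustrative).
    dir.create("mycompendium")
    dir.create("mycompendium/R")            # reusable functions
    dir.create("mycompendium/analysis")     # .Rmd notebooks with the analyses
    file.create("mycompendium/README.md")   # overview and file listing
    file.create("mycompendium/DESCRIPTION") # dependencies, authors, license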

Usually I don’t worry about this including unit tests or roxygen documentation. I like folks to have a .travis.yml that runs some or all of the .Rmd notebooks, but this isn’t always realistic. In many cases it would take too long, and in any event it would almost always be a better check to have more minimal unit tests that test the functions/code against circumstances where we already know what the result should be; in practice, though, I haven’t been that successful in getting folks to write “trivial” tests. I’d like to also turn on linting as part of the automated checks, but so far haven’t inflicted that on anyone. (My classroom students will probably be the first guinea pigs for the linting this semester.)
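As an illustration, a “trivial” test of that kind might look like this (growth_rate() and the expected value are hypothetical, not from an actual repo):

    # Check a project function against a case where the answer is already known.
    # growth_rate() is a hypothetical example function.
    library(testthat)

    test_that("growth_rate() recovers a known doubling time", {
      expect_equal(growth_rate(n0 = 10, n1 = 20, dt = 1), log(2))
    })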

Actual code review has been very ad hoc; i.e. I may comment on how code could be improved during a discussion with a trainee about a result or while attempting to debug something, but I have rarely done code review explicitly unless we are developing a software package as the main objective. I do code review in my classroom, but would like this practice to play a bigger role with my research trainees as well. Often it feels hard to review code when you don’t understand what it is doing in the first place, but I’m slowly getting over that as I learn more about how robust and maintainable code should look (e.g. code smells) and feel more confident in suggesting how to improve code which I don’t understand.

1 reply
September 2018 ▶ jsta

cboettig

Great question, I agree this is a challenge. Personally I think Makefiles are often a difficult entry-point; as a reviewer I first want to have more of a big picture overview of what the project is about and what the final outputs are (e.g. the key figures or tables that make up the results of a paper), and work backwards from there to see where they come from.

A README is a nice start to this, but I prefer a more long-form vignette / paper.Rmd for a finished project. I think a vignette or paper.Rmd should keep its embedded code as concise as possible, moving any complex operations into stand-alone functions in an R/ directory. I think long-running code should generate intermediate, cacheable objects in as generic a format as possible; e.g. output data tables as .csv files that can be read in if they already exist, or re-generated if they do not.
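A minimal sketch of that caching pattern (the function and file names are placeholders):

    # Regenerate a slow intermediate result only when its cached .csv is missing;
    # run_slow_model() and the file path are placeholders.
    get_model_results <- function(cache = "output/model_results.csv") {
      if (file.exists(cache)) {
        return(read.csv(cache))
      }
      results <- run_slow_model()  # the long-running step
      dir.create(dirname(cache), showWarnings = FALSE, recursive = TRUE)
      write.csv(results, cache, row.names = FALSE)
      results
    }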

September 2018 ▶ cboettig

hye

Agree with a lot of points here, especially minimal unit tests against demo inputs, which matter all the more when the real analysis takes a while to run.

I do want to add that my hesitation about unit tests has a lot to do with how much refactoring I’m doing as I go through the project. They would be great to include when submitting code to accompany a paper, where the analysis is relatively fixed, but they seem more questionable in day-to-day work.

It also makes sense to me that code review to help someone debug is going to look different from code review to make sure results are computationally reproducible.

2 replies
September 2018 ▶ hye

noamross

@cboettig’s opinions remind me of what has been a perennial challenge on this front: to have a review process, one needs some common agreement on what a project structure should be and what the standards are. Expectations need to be set at the lab/group level for this. My current opinion is that it’s not a problem that can be solved for large communities. Nonetheless, last year at the unconf some of us took a crack at creating a checklist that is agnostic to language and structure/build systems: https://docs.google.com/document/d/1OYcWJUk-MiM2C1TIHB1Rn6rXoF5fHwRX-7_C12Blx8g/edit#

September 2018 ▶ hye

cboettig

I’m :100: percent with Hao’s comment about how refactoring makes the notion of unit tests pretty hard in research code. In software development, I often have a reasonable idea what the API should look like, so even while I still need to do a good deal of refactoring of internals, it doesn’t break anything. In contrast, research code often isn’t even organized into functions to begin with. Still, it may be possible to think of a somewhat different testing/assertion paradigm for (rapidly evolving) research scripts. For instance, there was an interesting effort to develop a testing framework for Rmds at the 2017 unconf: https://github.com/ropenscilabs/testrmd.

Ironically, I think the more common problem I encounter in trainee code is too little refactoring. Am I alone in thinking this? I believe it’s easy to get stuck feeling “this giant block of code took me 2 weeks to write, I’ll just continue to modify and extend it”, when the more sensible thing is often to re-write it as smaller functions, not bigger blocks. Refactoring a script that already “works” can seem like a waste of time that only risks breaking things. When is it time to refactor, and how do folks encourage refactoring?
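As a toy example of the kind of rewrite I mean (all file and column names made up): pull a repeated chunk of a long script out into a small function so it can be tested and reused.

    # Before: the same cleaning steps copy/pasted for each site in one long script.
    # After: a small, testable function. All names are made up.
    clean_site <- function(df) {
      df <- df[!is.na(df$measurement), ]
      df$measurement <- df$measurement / 1000  # convert g to kg
      df
    }

    site_a <- clean_site(read.csv("data/site_a.csv"))
    site_b <- clean_site(read.csv("data/site_b.csv"))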

1 reply
September 2018

naupaka

A few thoughts:

I think a simple checklist of the most critical things to check in all analysis code before submission of a manuscript would be super helpful. There are a lot of niceties that could be included, but perhaps fewer necessities, especially for students and others who are relatively new to R. I think there’s also a distinction between guidelines for data/code/archival quality and guidelines for stats/viz/analysis quality; those are related but not entirely overlapping.

Relatedly: I think there’s a difference between code that is central to the findings of a manuscript (say, for theoretical or simulation studies) and code that is used in the presentation of results. For example, an undergrad or graduate student who has analyzed their data and written it all up in an Rmd might not need the code to be fully functionalized, refactored, and unit tested, but I think it is absolutely still critical that it is documented and organized such that it at the very least runs easily on another machine without much fuss.

In classes, I have Travis check for successful knitting of student assignments (submitted via PR on GitHub), which requires machine-readable documentation of non-base packages (including any installed from GitHub), data in the repository (with proper relative paths!) or else pulled from the cloud, and working syntax. I also have everything strictly linted with lintr to enforce basic code style conventions. For (often DNA sequencing-based) analyses with large data files (10s to 1000s of GB), the primary analysis steps are pulled out into R scripts and run separately; the intermediate results are saved as a csv when possible and Rdata when necessary, and these then become the data that are loaded at the top of the Rmd for further munging, visualization, stats, etc. Only the code in the Rmd has to run successfully to keep Travis happy. I haven’t yet found a good way to automate testing of scripts that take days to run on a huge server. I suppose linting those too would be a fair start, and perhaps encouraging more liberal use of stopifnot() throughout?
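For those long-running scripts, the kind of lightweight assertion I mean might look like this (the file and column names are made up):

    # Lightweight sanity checks sprinkled through a long-running script;
    # the file and column names here are hypothetical.
    counts <- read.csv("intermediate/otu_counts.csv")

    stopifnot(
      nrow(counts) > 0,            # something was actually produced
      !anyNA(counts$sample_id),    # every row is attributable to a sample
      all(counts$reads >= 0)       # read counts are non-negative
    )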

Repos that are associated with manuscripts get a version tag and are archived to Zenodo, which mints a DOI, which we then cite in the manuscript.

I think a big part of doing code review on these types of projects efficiently and effectively is having a good default compendium-type structure, as @noamross and @cboettig mentioned. I have personally felt a little torn between some of the different options out there (ProjectTemplate, rrtools, the others listed at https://github.com/ropensci/rrrpkg, the Compendium Google Doc, etc.).

September 2018

jules32

Hi – This is a really interesting topic and I’d definitely be interested to join the call to discuss it further (thanks for looping me in @stefanie!)

One quick thought is that with code review I immediately think about who would be involved. PIs? Collaborators? Lab Groups? I think there has to be a shared culture for coding at a pretty basic level for labs/teams, focusing on structure and commenting, hopefully with notebooks.

Our team has shared practices: we look over and run each other’s code, do individual checks with functions we’ve written, and ultimately do GitHub releases before submitting papers (and also annually for our repeated annual studies). But it really takes this culture of shared practices and time to spend looking at each other’s code, and raising the perceived value of that work so that people are given the time is an important piece.

2 replies
September 2018

aprilcs

This is something I have been thinking about & working on lately. The review guidance I present to my workshop attendees is focused on code review for the purpose of publishing computationally reproducible code that supports a paper, so perhaps a different objective than others in the thread. Looking forward to hearing how others do it in their labs!

My current (and always evolving) checklist is:

Organization

Documentation

Automation

Dissemination

September 2018

hye

Refactoring is one of those things that I think is important for code maintainability and reusability, but isn’t really discussed or taught. Some of the obstacles to implementation that I’ve been thinking about:

(1) concern about breaking working code; this could be addressed by better training and implementation of practices around version control

(2) unclear reward for the effort; dedicated time for code review could help with this - another person’s perspective can point out where code can be improved (or maybe even you swap and refactor someone else’s work) - and setting aside time elevates the task to something that is valuable and should be incorporated into regular practice.

September 2018 ▶ jules32

stefanie

@jules32

it really takes this culture of shared practices

yep - always culture/people + tech

September 2018 ▶ noamross

isteves

@noamross I really like the file tree! How do you generate it? It’s something I’ve thought about doing a few times, but I haven’t actually done it.

1 reply
September 2018 ▶ isteves

noamross

I use the tree command to generate it initially, pipe it to a file then edit manually. I’ve long thought of making something that makes updating/editing it easier, but haven’t had the time.
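If installing tree isn’t an option, a lighter-weight alternative from within R (assuming the fs package is installed) is something like the following; the output can be pasted into a README and annotated by hand:

    # Print a file tree of the current project (assumes the fs package).
    fs::dir_tree(recurse = 2)  # limit depth so the listing stays readable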

1 reply
September 2018

noamross

A thing I notice in checklists here, as well as those we use for RO package review, is the challenge of coming up with a general approach for reviewing whether the code is doing the thing it is supposed to. In theory, individual projects should have unit tests for this, but it is hard for a reviewer to know whether they should trust the unit tests, and hard for the author to write tests for errors they don’t anticipate. “Coverage” statistics are of limited value.

I don’t have a great solution for this, but I guess I want to make sure that any checklist has something like, “Are you confident the code implements the methods as described?” It’s a big job for the reviewer but it’s the central point, and one that can get a bit lost amongst all lists of best practices.

September 2018

jenniferthompson

THIS very much. I would love to have code review in place in my department (~90% of us code in R, though we use various approaches). But the very first thing anyone says when I bring it up is “who has the time for that?” Everyone is understandably concerned with getting projects to collaborators, etc and formal code review is somewhat of a foreign concept, so any type of review we have is basically ad hoc if someone hits a snag. Convincing enough folks to value code review enough to dedicate the time to it - often looking at someone else’s complex data wrangling code from a large multicenter study - is a big hurdle.

1 reply
September 2018

zabore

I’m in the midst of trying to get some type of code review going in my department, so I have some thoughts on this topic. I work in a group where (almost) all of our projects are conducted completely independently, and people program in R, SAS, Stata, and possibly other programs. Additionally, most data cannot (strictly) be shared even among members of our group due to HIPAA issues.

I believe the comments about the necessity of a culture of shared values surrounding coding are spot on, and in trying to get code review up and running in my group I have encountered some challenges related to this. The most common arguments I hear are a) it will be too time consuming and b) my code isn’t good enough to share.

Instead of peer-to-peer code review, which was facing too much resistance, I recently started a “Coding Workshop”. At regular intervals, someone submits a piece of code for review and a reviewer is assigned. The code is also made available to the rest of the group in advance. Then, in the meeting, the code author briefly explains the purpose of the code, the reviewer goes over their comments, and there is general discussion. The main points we have asked reviewers to focus on, since no data sharing is involved and therefore the code can’t be run, are:

  1. Is the code well commented?
  2. Is there repeated code that could be eliminated through use of a function/macro?
  3. Are numbers used in tables and reports generated automatically to avoid typos or copy/paste errors? (see the sketch after this list)
  4. Are there any functions or methods you know of that could improve efficiency?
  5. Is the code readable?
  6. Was there something you learned from reading the code that you would use in the future?
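On point 3, a minimal illustration of what we encourage (the data frame and column are made up): compute the value in code and let the report pull it in, rather than copy/pasting numbers.

    # Compute reported values in code rather than pasting them in by hand;
    # the data frame and column are hypothetical.
    median_age <- median(patients$age, na.rm = TRUE)

    # ...then in the R Markdown text:
    #   The median age was `r round(median_age, 1)` years.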

While this process will not catch errors in code that is producing results for manuscripts, the hope is that it will a) help everyone learn better programming principles and b) make people more comfortable sharing their code so that we can ultimately implement a more rigorous code peer review process.

Clearly we are a group in very early stages of thinking about best practices and some sort of shared standards across our very independent work, but I have gotten a lot of positive feedback on Coding Workshop after the first couple of meetings, so I hope it will be successful in leading to more rigorous efforts down the line.

September 2018

brunj7

Hi there,

And thank you for the interesting discussion!

I wanted to share some steps we follow on the Scientific Computing Team at NCEAS when we work with scientists on archiving their products, which often consist of a set of data inputs, scripts, and outputs (data and/or figures).

  1. We try to run the code. Yep, it sounds trivial, but it already lets us check whether we have all the necessary libraries and sourced code, such as scripts containing custom functions. Moreover, I think the most important thing this step checks is data access. To be able to run an analytical script, you need access to all the input files, which can be problematic in an external review, especially if you process large datasets that might themselves be the result of a data collation effort. It can also be problematic for internal review if you do not have a centralized way of managing your data (e.g. on a server with shared directories).
  2. Once we can run the script(s), we check that we get the same results as the output files that were provided to us (see the sketch after this list). If it goes well, we move on to the next step; otherwise we start a discussion with the scientists. This mainly catches version issues (in either the script or the data) and runtime environment differences (pretty rare in our case, as we often set up our collaborators on our analytical server).
  3. Then we start to look at the code in more detail, but I would not say we do an in-depth review, as some of the code is very specialized and uses complex models (which probably raises the question of how to clearly scope the code review process for reviewers). So far, for this archiving step, we have mainly focused on improving code commenting to make sure other scientists can understand what is going on in the code. We also ask our scientists to describe their workflow well when there are several parts/scripts to their analysis (still looking for the best tool to do so!).
  4. Finally, and this is a work in progress, we would like to help scientists modify their code so that it pulls data directly from repositories rather than reading it from local file systems. This was one motivation for starting to develop the metajam R package (https://nceas.github.io/metajam/) with @isteves and Mitchell Maier, aiming to provide simple functions to do so. We are not quite there yet, and it might be outside the scope of this discussion, although with the growing requirement to archive data with publication, it might be an interesting recommendation to facilitate the code review process.
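A minimal sketch of the step-2 comparison (the file paths are made up): re-run the analysis, then compare the regenerated table against the output file the authors provided.

    # Compare a regenerated output against the file the authors provided;
    # paths are hypothetical.
    provided    <- read.csv("provided_outputs/summary_table.csv")
    regenerated <- read.csv("outputs/summary_table.csv")

    all.equal(provided, regenerated)  # TRUE, or a description of the differences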

Some other thoughts: to me it seems that code optimization is different from code review (as asked about here). Refactoring your code to make it modular and reusable, or profiling it to make it more efficient, is often a hard sell to scientists, especially when they are “done” with their analysis. I agree that working on training and recommendations for how to structure your projects (I need to check some of the suggestions in this thread!!) seems the way to go, and that these changes would be hard to achieve via code review. That said, I think one output of the review process should be defining and setting up unit tests on the scripts that have been reviewed; this would be a good way to check further developments/improvements.

I hope this is useful!

Julien

1 reply
September 2018 ▶ brunj7

stefanie

I hope this is useful!

Fantastically useful. Thank you for the clear layout of your approach @brunj7.
We’ll be arranging the details of this community call soon.

September 2018

prosoitos

A bit off topic, but I have been giving a lot of thought these days to how we might move towards having journal editors require the submission of code along with a paper and its data.

My experience is aligned with that of @jenniferthompson and @zabore, but much worse: my supervisor does not use R; nobody in my lab does any code review (even though most students use R); PIs only look at the results of analyses and question the statistical methods used, but never the code used to produce them; there is never any refactoring done by anyone - the sole idea of it would surprise everybody I work with; even basic code formatting is all over the place, so forget about writing decent code using functional programming to replace copy-paste or crazy loops…; absolute paths, setwd(), and other forms of non-portable code cripple everybody’s scripts; nobody uses GitHub or even version control. Our culture is so far away from anything acceptable on this front that it will take years and years to get to any reasonable place. And that is why I feel that things will only start to really change when there is pressure from higher up (meaning the journals) to provide code. Until then, people don’t know, don’t care, don’t have the time, don’t have the incentive, and don’t give any thought to the subject of writing readable, portable, and reviewed code.

Reading some of the posts in this thread, I was impressed to see that in other labs, things are much further ahead. But I think that my lab is, unfortunately, more representative of a classic university research lab. There is a lot to do. And things are often so bad that doing it from the ground up seems unrealistic. And a top down incentive seems to me to be the only way to shake things up. It could also be a way to impose some form of norm. But I have no idea how to walk towards this goal.

1 reply
September 2018 ▶ prosoitos

prosoitos

(I am aware that my post is very naive, and I am extremely thankful and excited to read all the compendiums and other great links on this thread, and the papers that were published on the importance of code publication. But all of this feels like bottom-up grind work, and I am pessimistic about when it might reach my lab and countless others like it. That’s why I would love to hear about approaches for reaching out to journal editors or funding agencies like NSF - things that are BIG incentives to research labs and make things change on a large scale. In Canada, the 3 main funding bodies (the Tri-Agency) recently made a wonderful move towards open access, and that is really making things change over here (not about code, however). But maybe a lot of this sort of bottom-up work needs to be done before big funding agencies or journals can be convinced to set policies that will then force the generalization of these better practices to a much wider community of researchers?)

But I am getting more and more off-topic. Sorry about that.

2 replies
September 2018

stefanie

Save the date :spiral_calendar:!
Community Call on this topic takes place Tuesday, October 16, 2018, 9-10AM Pacific (find your timezone)

Agenda:

More details to come very soon

1 reply
September 2018 ▶ prosoitos

hye

@prosoitos - I feel your pain, and I have also encountered labs where spending time on code review or refactoring would be scoffed at. I also agree that a big lever here is for funding agencies and journals to be involved in promoting better practices. Unfortunately, I think it needs to be more than just requirements for code sharing, because that doesn’t address standards or enforcement. My hope is that funding agencies see the need to implement both requirements and support training for entire research labs, since I imagine there are plenty of places that would be interested in improving practices, but can’t overcome the barrier of changing on their own.

(And if you want to chat more, feel free to reach out via private channels.)

September 2018 ▶ prosoitos

noamross

You are completely right about the leverage that journal editors and granting agencies have. Journal policies have been a very useful tool in similar areas, such as expanding data publication requirements. In my field, ecology, the adoption of preprint and data-deposition policies by the major journals occurred largely in the past 10 years. We are slowly seeing this happen with code, too - partial policies like code upon request (example from Nature) are useful. They give reviewers the tools to request code and start to push for standards that eventually can make their way up to policy. I pretty much always make such requests if the journal has such a policy and attempt to reproduce results, and I know this provides a pretty powerful incentive for the authors! (This can also annoy the authors a great deal, so it’s important to be helpful and constructive when reviewing the results so that they appreciate the feedback.)

If you want an example of a lobbying effort, I sent this letter regarding data access and preprints to the editor-in-chief of a journal in my field about three years ago, and 80% of the recommendations were adopted. This was accompanied by some personal lobbying, which is the pattern I’ve seen with other journals - a few private and public letters plus some conversations with colleagues at a conference can go a long way. I imagine enough places have adopted minimal code-sharing policies now that they could be used as examples. Most editors are eager to emulate the policies of what are perceived as prestige or competitor journals, so when a big campaign pushes a Nature to change policies, it makes it much easier to leverage that to lobby for policies in more niche publications.

1 reply
September 2018 ▶ noamross

prosoitos

Wow. This is fantastic. Thank you!

Your lobbying efforts are really great and a beautiful example of how to have an impact at the individual level. This really answers a lot of my questions and is extremely inspirational. Thank you very much for sharing!

October 2018

keith

In addition to requesting / requiring code in relevant submissions, one could also imagine journals recruiting reviewers specifically to evaluate code / reproducibility.

While it is obviously not reasonable to expect such a reviewer to exhaustively evaluate the validity of large and complicated software, there are some very basic and easy things that could be checked with minimal effort.

For example, I recently participated in a three day reproducibility workshop at NIH which attempted to teach researchers about the principles and best practices of reproducibility by reproducing ~10 bioinformatics / genomics papers which appeared to have all of the information / code necessary to be easily reproduced. Within 2 minutes of looking at the RMarkdown code for the very first paper, it was obvious that it had no hope of ever being run (referenced variables not defined anywhere in the file).

Some things that would be easy to check:

In about ten minutes of downloading a script / software and attempting to get it running, you could at least make sure it passes these minimum requirements.

2 replies
October 2018 ▶ keith

rywhale

Seems like these kinds of checks could be fairly easily bundled into a package. Does this functionality already exist (in devtools or elsewhere)?

I imagine something like

reprod_check <- check_reprod('myscript.R')

That would return lines containing undeclared variables, setwd() calls, etc.
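I don’t know of an existing package that does exactly this, but a rough sketch of such a check (hypothetical function, only catching two easy portability red flags) might be:

    # Rough sketch, not an existing package: flag setwd() calls and absolute
    # paths in a script, two easy portability red flags.
    check_reprod <- function(path) {
      exprs <- parse(path, keep.source = FALSE)
      calls <- unlist(lapply(exprs, all.names))
      lines <- readLines(path, warn = FALSE)

      flags <- character()
      if ("setwd" %in% calls) {
        flags <- c(flags, "contains setwd() calls")
      }
      if (any(grepl("([A-Za-z]:\\\\|/Users/|/home/)", lines))) {
        flags <- c(flags, "contains absolute file paths")
      }
      if (length(flags) == 0) "no obvious portability problems found" else flags
    }

    # check_reprod("myscript.R")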

I suppose the alternative is to just encourage researchers to bundle their code into packages to accompany publications, thereby addressing the documentation issues, calls to libraries, etc. but that might be an unrealistic ask…

October 2018 ▶ keith

stefanie

one could also imagine journals recruiting reviewers specifically to evaluate code / reproducibility

rOpenSci actually has a collaboration with Methods in Ecology and Evolution (MEE). Publications destined for MEE that include the development of a scientific R package now have the option of a joint review process whereby the R package is reviewed by rOpenSci, followed by fast-tracked review of the manuscript by MEE. Authors opting for this process will be recognized via a mark on both web and print versions of their paper.

Described here: rOpenSci | Announcing a New rOpenSci Software Review Collaboration

In this case, rOpenSci manages the package review process, so it’s not the journal recruiting reviewers, but it’s a good start.

October 2018 ▶ stefanie

stefanie

Details of this Tues Oct 16 Community Call, including how to join: https://ropensci.org/blog/2018/10/05/commcall-oct2018/

Pass it on!

October 2018

stefanie

Resources on this topic, in no particular order (add yours!)

October 2018 ▶ noamross

Myfanwy

I would love to help you do whatever it would take to make this thing!

October 2018 ▶ jenniferthompson

Myfanwy

Very similar boat at my workplace. Additionally, all the coders we have are essentially at the same level - doesn’t mean we can’t help each other, but does mean we may have a bit of a plateau effect when it comes to improving code through review…

October 2018

aprilcs

Thanks for the great community call today! As promised, here is a quick post about code review as part of the peer review process for journal articles. Code Ocean is currently piloting code review with Nature; you can find some details here: https://www.nature.com/articles/s41592-018-0137-5, and the perspective of our developer advocate, Seth Green, on the code review process here: https://codeocean.com/blog/post/nature-journals-pilot-with-code-ocean-a-developer-advocates-perspective

Generally speaking, the code review process is as follows:

  1. Authors upload a working copy of their code to Code Ocean.
  2. Code Ocean verifies that the code runs and delivers results.
  3. Code Ocean provides Nature editors with a private link (blinded or unblinded) to the code capsule for peer review of the code.
  4. Once the code and article are approved by reviewers, Code Ocean will mint a DOI and include a link to the article in the metadata.
  5. Nature includes a link to, or embeds, the Code Ocean widget in the article.
  6. Nature readers will be able to run code and reproduce the results associated with an article by simply clicking a button, as well as edit the code or test it with new data and parameters.
November 2018

stefanie

Not everyone reading about this stuff knows what “refactoring” or “linters” are. In the summary blog post about this Community Call, we’d like to link those words to definitions. Anyone have a favourite? Otherwise I’ll go with Wikipedia.

1 reply
November 2018 ▶ stefanie

cboettig

Good call! For linting, it might be more helpful to most of our audience to link directly to https://github.com/jimhester/lintr.

For code refactoring, the Wikipedia entry looks like a good choice to me.

November 2018

stefanie

Thanks Carl! Do you have recommendations for links for unit testing, continuous integration, and container as well?

November 2018

stefanie

recommendations for links for unit testing, continuous integration, and container

November 2018

stefanie

We’ve published a summary of the Community Call on this topic, written by the speakers, Hao Ye, Melanie Frazier, Julia Stewart-Lowndes, Carl Boettiger, and rOpenSci software peer review editor Noam Ross!