Thanks a lot for offering to help.
What I have in mind at present is mostly your suggestion 2: parse the raw instrument file to a tidy data frame, and perhaps in time evolve a framework for easily writing these parsers.
I also like your idea of letting users add extra data (eg sample identifiers and location of controls) in plater-format.
Here are my initial thoughts on which columns such a data frame should have for a kinetic absorbance experiment in 384-well format:
- readerfile (char): the name of the raw file parsed
- barcode (char): the barcode of the plate (my reader can read barcodes)
- well384 (char): the well (A01, A02, … P24)
- absorbance_nm (num): the detection wavelength in nm, eg 405
- kinetic_step (num): the cycle number (1, 2, … up to the number of kinetic steps)
- kinetic_sec (num): seconds since the beginning of the experiment
- OD (num): the measured intensity (for absorbance, typically between 0 and 3)
- chamber_temperature_C (num): the temperature in degrees Celsius
- warnings (char): warnings reported for the plate or well
So the first row could look like this in csv-format:
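For illustration only (all values invented, column names as listed above):

```
readerfile,barcode,well384,absorbance_nm,kinetic_step,kinetic_sec,OD,chamber_temperature_C,warnings
run01.txt,PLATE0001,A01,405,1,0,0.042,37.1,
```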
I consider this the minimum information needed to analyze the results (I guess kinetic_step is redundant, but it is very convenient). Any immediate comments on this?
Further columns I’m considering, but less sure about:
- kinetic_timestamp (date-time): wall-clock time-stamp of the measurement
- table_version (char): the name (including version) of this format. In the case above it could be “kinetic_absorbance_384_v1”
The table_version could also be an S3 class, but an advantage of a column is that it survives being stored as a csv-file.
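To make that last point concrete, here is a minimal sketch in base R (all data values invented, column names taken from the proposal above) showing that a table_version column survives a csv round trip, whereas an S3 class or attribute would be dropped by write.csv/read.csv:

```r
# One-row example of the proposed table, with invented values.
plate <- data.frame(
  readerfile            = "run01.txt",    # hypothetical raw file name
  barcode               = "PLATE0001",    # hypothetical barcode
  well384               = "A01",
  absorbance_nm         = 405,
  kinetic_step          = 1,
  kinetic_sec           = 0,
  OD                    = 0.042,
  chamber_temperature_C = 37.1,
  warnings              = "none",
  table_version         = "kinetic_absorbance_384_v1",
  stringsAsFactors      = FALSE
)

# Round-trip through a csv-file.
tmp <- tempfile(fileext = ".csv")
write.csv(plate, tmp, row.names = FALSE)
plate2 <- read.csv(tmp, stringsAsFactors = FALSE)

# The version information is still there after the round trip.
stopifnot(identical(plate2$table_version, "kinetic_absorbance_384_v1"))
```

An attribute set with class() or attr() on the original data frame would not appear in plate2, which is the advantage of the column approach.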