There are a lot of steps needed to transform our success in OpenTB Round 2 into a working diagnostic. The Gates Foundation has given the Das Lab a grant for the “hardware” part of that transformation. But when the dev team started thinking through the whole process in more detail, we realized the weakest link is getting rapid player feedback, in the form of new designs that address changing requirements, as the engineer/experimentalists narrow down the detailed design of a point-of-care device.
The most obvious case in point is getting array results for the Round 4 OpenTB designs. This has been an issue because Johan Andreasson, the post-doc who helped pioneer the technique and personally conducted the Eterna array experiments, has moved on to a new job. He is still available for consultation, but can’t spend the time needed to guide the experiment through its many steps. Fortunately, Feriel Melaine, the new post-doc who successfully replicated the array results for the AK2.5 design using a bead-based experiment, has now agreed to take on the task of getting us array-based results for the Round 4 designs.
But in addition to that, we realized that we didn’t really have adequate structure in place to run a rapid test cycle on the order of one every two weeks. The lab believes it can complete its part of the work (receiving the list of designs, ordering and receiving the DNA templates needed as inputs to the experiments, running the experiments, and returning the data to players) in one week. But a two-week cycle implies that players would then have only one week to look at the results, analyze them, disseminate that analysis to other players, create and submit new versions of their designs, and then collectively decide which designs should be submitted to start the next rapid feedback cycle. We have no precedent for doing that. Now we need to create a process for it, and we’ll be using the extended Light-up Sensors project as our testing ground.
To help organize the discussion a bit, I propose subdividing the player part of the process into individual steps, starting with the step that is about to fall into players’ laps with the first 6 designs (currently in the lab):
- Players (typically the more experienced ones) analyze the results and try to distill what it “means”.
- Analyses are disseminated in a form that is easily accessible by all lab players.
- Players submit new designs based on what they think they have learned from the analysis and subsequent discussion.
- Players select a new set of designs for the next rapid feedback cycle.
I’ll immediately follow up with a separate “Reply” for each of these substeps, and if that organization fits what you want to say, you can “Comment” on the corresponding “Reply”. But there’s no need to feel constrained by that structure.