As an offshoot of the discussion Comparing SHAPE data between different designs and different labs, Rhiju has asked me to shepherd a project designed to act as a measure of the overall reproducibility of SHAPE results from one synthesis round to another. The general idea is that we will come up with a set of RNA sequences that will be synthesized in every round, and then monitor how consistent the results are from round to round. We haven’t documented all the details, but here are my thoughts on how it will work. Just be warned that some details may change.
As players, we’ll decide on about 40 sequences to be synthesized each round. In order to test all potential sources of variability, the barcodes will be (more or less) randomly reassigned for each round. Over time, we might decide to replace a few designs with ones that we think will be more informative. But for the most part, it should be the same set of designs, over and over again. I’ll take responsibility for some combination of a standardized data analysis and a narrative evaluation of each synthesis round, and of course the data will be available for anyone who wants to look for anomalies.
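To make the reassignment idea concrete, here is a minimal sketch of randomly pairing designs with barcodes each round. The design IDs, barcode strings, and seeding scheme are all invented for illustration; the real synthesis pipeline assigns barcodes differently.

```python
import random

# Hypothetical IDs -- placeholders, not the lab's actual designs or barcode pool.
designs = [f"design_{i:02d}" for i in range(40)]
barcodes = [f"BC{i:02d}" for i in range(40)]

def assign_barcodes(designs, barcodes, seed):
    """Randomly pair each design with a barcode; a new seed each round reshuffles."""
    rng = random.Random(seed)
    shuffled = list(barcodes)
    rng.shuffle(shuffled)
    return dict(zip(designs, shuffled))

round_1 = assign_barcodes(designs, barcodes, seed=1)
round_2 = assign_barcodes(designs, barcodes, seed=2)
```

The point of the sketch is just that every design keeps its sequence from round to round while its barcode changes, so barcode effects get averaged into the round-to-round variation rather than staying tied to one design.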
So what 40 sequences should we choose? They don’t need to all have the same target folding or even the same length. In fact, I am thinking that we’ll want as much variation as we can think of – short/long, stressed/non-stressed, Christmas trees and cub scouts, small loops, big loops, long stacks, short stacks – you name it. It isn’t a requirement that they be sequences that have been synthesized before, but if we do have some of those, we can start getting comparisons with the first round, which seems nice.
The ground rules for this lab don’t quite match up with anything we’ve done before. There won’t be any points associated with it, and Jee and I will be responsible for working out the details of how the sequences get into the system. For now, I’ll start a wiki page for collecting the sequences.
So what sequences would you like to be part of this lab, and what makes them good candidates? To get things started, you can just post your ideas here. If you’re interested in following through, you should probably create a wiki account so you can be a first-class participant.
I like your idea here.
I have an idea for one of the tests. I would like to see two designs that are mirror twins, each with one long stem and a few shorter ones. In one of the designs the long stem should come first, and in the other it should come last. I would like to see whether such a pair behaves similarly, or whether there is some effect on the lab data (or perhaps chirality effects). I would like to know whether it matters where the long stem is placed in relation to the short ones, and whether things like that can affect the data results.
Keep in mind that this lab is structured differently than all the others. Even though the lab will get run every synthesis, the variable will be the synthesis run and the constant will be the sequences. So it is better suited to answering a question like “Does the quality of the experimental process vary more or less when a specific sequence is synthesized 3’ to 5’ instead of 5’ to 3’?” than “Do sequences (in general) fold differently when synthesized 3’ to 5’ instead of 5’ to 3’?” In the latter case, a pair of standard labs (either with you having control over which submissions are synthesized, or with you taking an active role in directing voters’ attention to the submissions you want synthesized) would get you more data.
Having said that, I encourage you to go ahead and submit a pair of sequences. Unless perhaps there is a flood of sequence submissions, and we have to make tough choices, I can assure you they will be included.
If we are to resubmit previously chemically mapped sequences, that should mean the barcodes that were used will be reused too. Right? And it means that those barcodes have to be marked as reserved already at the start of a new round. Am I correct? If so, we only have 2-3 days to pick all the candidates, or it may have to wait until the start of the next round of Cloud Lab.
The barcodes are not going to be preserved. At first, this seemed to me like a drawback, but I’ve changed my mind. It does add one more source of variation to the results for the main design, which will be confounded with any experimental error due to process variation. But if we view this from the point of view of the non-player scientist, who doesn’t have to (or get to) specify the barcode, the barcode is simply another source of experimental error.
Note that there is currently a philosophical split between player-proposed labs, where the barcodes are considered to be part of the design and are thus included in the scoring, and expert labs where barcodes are considered to be part of the experimental process. The plan for the reproducibility lab is consistent with the latter model.
I should add that I’m actually still unclear whether there are synthesis slots available in the lab set that is closing this weekend. If there are, the reproducibility lab will probably have only a small number of sequences, if for no other reason than we won’t be ready with 40 good choices.
Also, the wiki page is up.
Thx for the advice. I think I will go make two standard labs with mirror structures. I think I understand better now what the main focus of this lab is.
… or start with a past one, calculate and publish the mirror sequences, and encourage players to choose from, and vote for, those.
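Assuming “mirror” means the target structure reversed end-to-end, calculating the mirrored structure is mechanical: reverse the dot-bracket string and swap the brackets so each pair stays a pair. A minimal sketch (the dot-bracket example below is made up, not from any past lab; mirroring the actual nucleotide sequence is a separate design step, since paired bases must still be complementary):

```python
def mirror_structure(dotbracket):
    """Reverse a dot-bracket string, swapping '(' and ')' so pairs stay paired."""
    swap = {"(": ")", ")": "(", ".": "."}
    return "".join(swap[c] for c in reversed(dotbracket))

# Long stem first ...                               ... becomes long stem last.
print(mirror_structure("((((....))))...((..))"))  # ((..))...((((....))))
```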
That’s a fine idea. Bought.
If you could reduce the request to “specs” and shoot for next round, I think that would be better.
“specs” = a list of requirements you want to see.
Yes, contribute in whatever form you are interested in doing. And at this point, I’m quite sure nothing is going to be cast in stone on this round.
I just had this exchange with Rhiju:
_rhiju → Omei: please submit up to 40 sequences, this weekend if possible!
Omei → rhiju: Will do. Am I correct in thinking that we can refine the set over time, if we decide we would benefit from different sequences?
rhiju → Omei: yes of course. I think we’re granting you (and the players you represent) a ‘standing order’ of 40 sequences over each cycle to characterize errors and to make suggestions to us devs for viewing & experimental improvements. I’m excited to see what you find!_
So I’ll work on it this weekend, with anyone else who wants to take part. It’s obviously a short time scale, but we won’t be stuck with any decisions that got made in haste.
Here’s a summary of what got submitted for the current synthesis.
Nineteen specific sequences were nominated by players. I view this as being the first pass on the collection of sequences that will be included in every synthesis round.
To take advantage of the full allotment of 40 slots, I chose the other 21 sequences from the previous synthesis round, since there have been many questions about whether something about it was atypical. For three of those labs (Semicircle 2 Bends, Location Dependent Chemical Footprints 5 and Location Dependent Chemical Footprints 6), I chose 5 sequences each.
From Location Dependent Chemical Footprints 1, I chose just one sequence (AGCACAAGAUCAGUACAAGUGCGAAAGCACAAGUACAGACGGAAACGUCAGAUCAAGUGCAAA) and repeated it six times. This will be the most direct test yet of what effect the barcode has on the SHAPE results and overall score for a design.
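Once the data is back, one simple way to quantify the barcode effect is to compute pairwise correlations between the reactivity profiles of the six barcode variants. A minimal sketch, using invented placeholder reactivities rather than real SHAPE data:

```python
import itertools
import statistics

def pearson(xs, ys):
    """Pearson correlation between two equal-length reactivity profiles."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# One invented profile per barcode variant of the same design (placeholders).
replicates = {
    "BC1": [0.1, 0.9, 0.8, 0.2, 0.1],
    "BC2": [0.2, 0.8, 0.9, 0.1, 0.2],
    "BC3": [0.1, 0.7, 0.9, 0.3, 0.1],
}
pairs = {(a, b): pearson(replicates[a], replicates[b])
         for a, b in itertools.combinations(replicates, 2)}
```

If the barcode really is just another source of experimental error, the pairwise correlations should cluster tightly; systematically lower correlation for particular barcodes would suggest a barcode-specific structural effect.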
Update: The cleaner data from the rerunning of the first Reproducibility Lab is now available, so I have resumed my analysis of it. I’ll post any big conclusions here in the forums. But I’m making my notes in a Google doc. Feel free to look over my shoulder and post your own thoughts.
I recommend that the color coding (legends) of the charts in your Google doc be exactly the same, to make comparisons less difficult. As I read them, they are different for some series and the same for others.
Done. I had updated them in my spreadsheet, but not the screen capture in the Google doc. Thanks.