There has been some bewilderment about the switch scores lately, after the results of the Cloud Switch lab data were published. A lot of the scores did get revised, but some of them looked just the same as before. I asked Rhiju and got an explanation. He asked us to think about how we want the switch score to work. Here I pass on the conversation between us:
The news post said to mention if we found anything odd with the scores. I was really happy when I saw Hotcreek score as a clean winner, but sad when I saw the scores for the two separate states. I hope I am just reading it wrong. I know it is hard to get every base scored correctly in the winners. This one looks exactly like the earlier problem with the scores.
I just took a look – I think the numerical scores are doing what they are supposed to, but maybe that's not great.
For Hotcreek, the nucleotides that change pairing status do go in the right direction, so it gets a perfect switch score!
Obviously, as you point out, the separate state scores aren't so great. In this case, we were more interested in getting evidence of the right conformational change than in making sure that every hairpin is super-solid. But you're right that it is important to reward and discriminate designs that have the 'whole package' rather than somehow winning on a technicality.
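(To make the distinction concrete, here is a minimal sketch of what a switch score along these lines could look like. The function name, the reactivity inputs, and the 0–100 scale are illustrative assumptions, not Eterna's actual scoring code.)

```python
# Minimal sketch: score the fraction of "switching" nucleotides whose
# chemical reactivity moves in the designed direction between the two
# states. All names and the 0-100 scale are illustrative assumptions.

def switch_score(reactivity_state1, reactivity_state2, expected_direction):
    """expected_direction[i] is +1 if nucleotide i should become more
    reactive (lose its pair) in state 2, -1 if it should become less
    reactive (gain a pair), and 0 if it is not supposed to switch."""
    correct = total = 0
    for r1, r2, sign in zip(reactivity_state1, reactivity_state2,
                            expected_direction):
        if sign == 0:              # non-switching nucleotide: ignore
            continue
        total += 1
        if (r2 - r1) * sign > 0:   # reactivity moved the designed way
            correct += 1
    return 100.0 * correct / total if total else 0.0
```

Note that a design like Hotcreek can hit 100 on a score like this even if neither individual state folds cleanly, which is exactly the 'technicality' problem described above.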
Perhaps we can devise a hybrid score – something like
(1/2) * switch_score + (1/4) * shape_score_state1 + (1/4) * shape_score_state2
and that can be the 'final' score, though we can also present all three.
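(As a sketch, the proposed weighting in code. It assumes all three inputs are on the same 0–100 scale; the argument names are chosen for illustration and are not actual Eterna variable names.)

```python
# Minimal sketch of the proposed hybrid score, assuming all three
# component scores are on the same 0-100 scale.

def hybrid_score(switch_score, shape_score_state1, shape_score_state2):
    """Weight the conformational change at 1/2 and the structural
    quality of each of the two states at 1/4 each."""
    return (0.5 * switch_score
            + 0.25 * shape_score_state1
            + 0.25 * shape_score_state2)
```

Under this weighting, a hypothetical design with a perfect switch (100) but mediocre state scores (say 60 and 60) would get 0.5*100 + 0.25*60 + 0.25*60 = 80 rather than a flat 100, so a clean switch alone could no longer win outright.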
What do you and the other players think? Or is there a better scoring scheme?
This discussion is super-valuable, as we will be doing more switches – some as player projects, some as expert projects – in the coming months. We want the default scoring system to be reasonable… and even though we worked hard on it last year, this massive increase in throughput is making us revisit everything. Super cool.