Discussion Topic - Voting Strategies

EteRNA Players’ Perspective on following topics -

  1. How to choose an RNA design to vote for?
  2. Strategies used in the past voting rounds
  3. Lessons learned from the past voting rounds

Useful references from EteRNA players
a) RNA Lab: Christmas Trees, Cub Scout Projects, and Optical Illusions by dimension 9
b) Why wouldn’t I want to use all G-C bonds? by ccccc
c) What did you learn from: Lab 103 Round 1? by ccccc
d) RNA Lab Guide by mpb21

Since finally entering the LAB part of EteRNA, I have developed some issues and concerns with the voting system, and I know I’m not alone; the boards are filled with others voicing similar concerns.

Before jumping in, however, I just want to insert a disclaimer: “None of the ideas below are things I am 100% certain I am in favor of. They are just a sampling of issues I’ve noted and wanted to bring into the public conversation.”

I think there is a case to be made for many alterations to the voting system, but I am going to bring up only two for the sake of this conversation: namely, 1) “Blind Tally” voting, and 2) “Blind Author” voting.

I might add right up front that I believe neither of these ideas will be popular or well-received. However, I believe both could have an ultimately positive effect on the voting process itself, as well as on its results, since both remove information that adds no real value to the SIMPLE BASIC MERIT of the design submission itself.

  1. “Blind Tally” voting (not showing the number of votes any design has received until AFTER voting is closed) would minimize, if not entirely eliminate, the (quite natural) “pile-on” effect of voting with the crowd. Currently it is too easy to cast a vote on this criterion alone, and votes cast this way do not carry the same true value as a vote based on one’s own individual best-effort analysis.

  2. “Blind Author” voting (not revealing the author of each design until AFTER voting is closed) would minimize or eliminate “guru-following”: voting for a design solely because it was composed by a top player. Newer players, or non-top-scoring players, would then stand a much greater chance of having their designs voted for or selected - assuming the design was good - since they would not be dismissed right away for being new and unknown.

(Note: This might also necessitate a non-personal submission-naming scheme as well)

Both of these changes aim to make honest personal evaluation of the design itself the sole criterion for casting a vote, by removing too-easy substitutes for more difficult and time-consuming analysis.

I want to stress that, even though I am advancing these ideas, I myself would miss having this information; I LIKE seeing who did each design and how many others have voted on it, and it definitely enhances personal involvement and interest. But I keep catching myself beginning to make voting decisions BASED on these factors before - or even totally WITHOUT - having taken the time to analyse each and every design and decide based on that analysis (especially when time is running out). I feel this pollutes the voting effort and is bad for EteRNA and its goals.

Granted, these ideas would make the voting analysis effort MUCH more demanding of the players; it takes a lot of time and effort and thought to go through dozens of submissions, I know, and we naturally look for ways to narrow down the candidate base, so that we can concentrate our analysis efforts more profitably and economically. But shouldn’t this winnowing process be based on something like past high nature-score? …or from one’s own growing sense and intuition about chances for success derived from free energy and melting point value ranges? … or simply from how similar a design is to other successful designs? … (instead of how many others chose it, or what amount of esteem the author may command?)

Perhaps the search should be defined and promoted in earnest (among players and staff) to come up with more appropriate, relevant, and valuable criteria for the design-winnowing process, while simultaneously seeking ways to keep these interesting and valuable - but peril-fraught - criteria like vote tally and author from affecting the voting results in unintended and perhaps even counterproductive ways.

Alternatively, perhaps the initial winnowing should even be conducted by the EteRNA staff rather than the players; this could reduce conflict-of-interest issues among players, as well as tap the most knowledgeable resource to shape the most promising starting pool of designs.

In closing, I just want to say that I am not necessarily lobbying for any of these changes; rather, I simply want to share my perceptions and concerns and to advance this conversation in the interests of all the players and of EteRNA’s own goals.

This reply was created from a merged topic originally titled
Voting Solely on Design Merit - How to Implement?.

I agree with blind tally voting. Voting with the wisdom of the crowd presumably doesn’t increase the wisdom of the crowd.

I disagree with blind author voting. You want to find the ideas the current known gurus are submitting. Good gurus will explain why they have chosen a design. Betting on an unhandicapped horse that has won a lot of races seems like a reasonable strategy.

I’d like to see a value on the voting page that sums up the results of the dotplot for each design.

I’m new to the lab and found that voting coherently on lab submissions is not an easy thing for a newbie. As I’ve looked further into it, it seems that even experienced players must put in significant effort to vote intelligently. The information given in the voting table is useful, but I found myself wanting more. I was able to eliminate a few designs from the number of GC pairs and free energy/melting point, but it was difficult to decide what should guide my decision between the remaining designs. I can see why people look at the number of votes a design has received to help them decide, which adds to vote snowballing.

When trying to design a decent lab submission, I was examining the past lab results and started looking at the dotplots. From the lab results I looked at, the dotplot seemed a fairly good indicator of how many bonds might be incorrect as well as which bonds will likely be problems.

I started using dotplots as an additional tool to decide which designs to vote for, but looking at all the design dotplots is a cumbersome process. Sometimes I’d forget which design’s dotplot I had just looked at, and it was difficult to compare dotplots between designs.

From what I understand, the unwanted dots (i.e. the dots in the upper/right half of the map that do not have a corresponding dot in the lower/left half) represent probabilities that the RNA will form unwanted bonds. Could all these “unwanted” probabilities be summed into a single value? Ideally, any desired bond with probability less than one should also be accounted for. For example, a desired bond with probability .8 should count the same as an unwanted bond with probability .2 (1 - .8 = .2), whereas a desired bond with probability 1 would count as zero. A low dotplot-summation value would indicate a clean dotplot, and a high value would indicate many possible bonding errors. I’m not sure, but it may be analogous to “ensemble diversity” in RNAfold? Getting that information from RNAfold, though, seems even more cumbersome than checking the dotplots.
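To make the proposal concrete, here is a minimal sketch of what that dotplot summation might look like. This is purely illustrative: it assumes you already have a base-pair probability matrix (`pair_probs[i][j]`, with i < j, as a dot plot provides) and the target structure as a set of (i, j) pairs; the function name and inputs are hypothetical, not anything EteRNA or RNAfold actually exposes.

```python
# Hypothetical "dotplot summation" score, as described above.
# pair_probs: square matrix where pair_probs[i][j] (i < j) is the
#             predicted probability that bases i and j pair.
# target_pairs: set of (i, j) tuples (i < j) for the desired structure.

def dotplot_score(pair_probs, target_pairs):
    """Lower is better: 0 means every desired pair has probability 1
    and no unwanted pair has any probability."""
    score = 0.0
    n = len(pair_probs)
    for i in range(n):
        for j in range(i + 1, n):
            p = pair_probs[i][j]
            if (i, j) in target_pairs:
                score += 1.0 - p   # desired pair: penalize shortfall from 1
            else:
                score += p         # unwanted pair: penalize its probability
    return score

# Toy 4-base example: the target structure pairs base 0 with base 3.
P = [[0.0, 0.0, 0.1, 0.8],
     [0.0, 0.0, 0.0, 0.1],
     [0.0, 0.0, 0.0, 0.0],
     [0.0, 0.0, 0.0, 0.0]]
print(round(dotplot_score(P, {(0, 3)}), 6))  # 0.4
```

In the toy example, the desired pair (0, 3) falls short by 0.2 and the unwanted pairs (0, 2) and (1, 3) contribute 0.1 each, so the score is 0.4. A single number like this could sit in the voting table next to free energy and melting point.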

Maybe summing the results of the dotplot and adding these values to the voting page are not easy tasks, but I believe it would lead to a widespread player appreciation for the dotplots. This would hopefully lead to a better understanding of the benefits and limitations of the dotplots. It might even help identify any flaws in the underlying assumptions, physics, and mathematics. Or maybe it would just help me vote on lab submissions. Well…hopefully not just me.

I think these ideas rock. Voting will always come down to human nature. It’s not the best candidate that wins but the one who appeals to the lowest common denominator.