So far, I like the new lab a lot. I’m not being too critical yet, as I know implementation is still in progress, but I do agree with Berex’s #5 request above: I would love to be able to see the Elo rankings, and how they progress comparison by comparison in real time, just like we see the regular votes change the rankings in real time. That way we could see the results of our own and others’ choices, and track the Elo progress of designs as they bubble up or sink in the rankings. For me, that is the next most important piece of implementation I’d love to see right away, ahead of minor convenience items in the interface, which I am sure will be forthcoming.
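For anyone curious how a ranking would move comparison by comparison, here is a minimal sketch of a standard Elo update; the K-factor of 32 is an assumption on my part, not something the devs have confirmed:

```python
def elo_update(r_winner, r_loser, k=32):
    """One Elo update after a single comparison.

    The winner's expected score is derived from the rating gap;
    the winner gains (and the loser loses) k * (1 - expected).
    """
    expected_win = 1 / (1 + 10 ** ((r_loser - r_winner) / 400))
    delta = k * (1 - expected_win)
    return r_winner + delta, r_loser - delta
```

So two designs starting at the same rating would swap 16 points on the first pick, and an upset win over a higher-rated design would move the rankings more.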

Also, I have to say, I am actually finding it “FUN” to do these comparisons! And I am learning a lot from having to do detailed evaluations of designs I might not otherwise have found the time to even look at. I’m seeing how a much more diverse range of designs looks in the various stats and read-outs in both EteRNA and RNAFold, and that is very enlightening in itself.

So, even these first 12 hours or so have given me a very good feeling about this whole effort!

Finally, it just occurred to me that it might also be of great interest if we could see how many comparisons we ourselves - and each of our fellow players - have done, both for each round and overall.

Just a quick thought (I’m still mulling over more extensive reaction):

You might want to change the wording on the page that says “Later when both RNAs are synthesized, if you made a right pick, you’ll earn 500 points.”

The way it reads now implies that all RNAs will be synthesized, which could lead to some confusion. It might be okay if there’s an explanatory screen before playing the game about how the whole process works, but I think it’s misleading. I might say something more like “Later, if both these RNAs are chosen for synthesis and you made the right pick, you’ll receive 500 points.”

Just thinking: it would be nice if there were an automated way to note it in the comments when someone used someone else’s design as the base.

So say if I used one of Ding’s designs, it would automatically say “Copied from Ding. 111333.” Maybe even with a synthesis score if it was from a synthesized design.

Cos I’m seeing a lot of designs that look like copies, but the comments don’t say who they were copied from.

we’re supposed to compare and pick from just the thumbnails?

… oh I see, click the “see all RNA” button.

hmm, …

I’d like (a) to bring up the RNAs one click smaller, the better to compare at a glance, and (b) to bring them up with the energy, melting, and maybe thumbnail plots already showing.

I like the old lab, but the new game lab is more - gamelike. Intoxicatingly addictive. It is actually quite funny looking at designs I would have deemed uninteresting before. Now I have to think more about why I think something will not work, which in the end will make me more conscious about what will work.

Please could we somehow keep both systems? I like the way it is possible to view and compare the puzzle data in the old voting system. I like the ability to pick out specific designers.

I don’t know how pairs are currently selected (I assume it’s simply a uniform random draw at this point), but the following seems to be the most efficient way to obtain a partial order > (based on an underlying total order) on a set P such that there is a subset S of size n in which every element of S is greater than every element of P \ S:

Let n be the number of elements to be chosen from the set P of m elements.
Let C be the set of c elements that have been in at least one comparison so far.
Let x = min(c, n)

If C = {}, compare two elements at random from P.
Otherwise, let T be the set of elements X in C such that there exist exactly x - 1 elements Y in C such that Y > X.
If T has exactly one element, compare that element to a randomly selected element in P \ C, or stop if C = P.
If T has more than one element, compare two of its elements at random.
If T has zero elements, compare two elements at random from the set of all X in C such that there exist exactly c - x elements Y in C such that X > Y.

Or in English:
If we haven’t compared any elements yet, compare two at random.
If there is a unique _x_th greatest compared element, compare it to a randomly selected uncompared element, or stop if every element has been compared.
If there are multiple _x_th greatest compared elements, compare two of them to each other at random.
If there are no _x_th greatest compared elements, compare two prospective unique _x_th greatest elements at random from among the elements already compared so far.
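The steps above can be sketched in Python. The bookkeeping I use here is my own assumption about how the known relations might be tracked: `compared` is the set C, and `greater_count[X]` / `lesser_count[X]` count the compared elements known (transitively) to be greater / less than X; keeping those counts transitively consistent is assumed to happen elsewhere.

```python
import random

def next_comparison(P, n, compared, greater_count, lesser_count):
    """Return the next pair (a, b) to compare, or None to stop.

    P: set of all elements; n: size of the top subset to identify.
    compared: set C of elements that have been in at least one comparison.
    greater_count[X]: number of elements of C known to be > X.
    lesser_count[X]: number of elements of C known to be < X.
    """
    c = len(compared)
    x = min(c, n)

    if not compared:                     # C = {}: seed with two random elements
        return tuple(random.sample(sorted(P), 2))

    # T: candidates for the unique x-th greatest compared element
    T = [X for X in compared if greater_count[X] == x - 1]

    if len(T) == 1:
        rest = P - compared
        if not rest:                     # C = P: fully resolved, stop
            return None
        return (T[0], random.choice(sorted(rest)))
    if len(T) > 1:                       # tie among x-th-greatest candidates
        return tuple(random.sample(T, 2))
    # T empty: compare among elements with exactly c - x known lessers
    U = [X for X in compared if lesser_count[X] == c - x]
    return tuple(random.sample(U, 2))
```

For example, with P = {1,…,5} and n = 2, once 4 > 3 is the only known relation, the unique 2nd-greatest compared element is 3, so it gets paired against a random uncompared element.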

I do like this method of ranking designs. In the old method, it was difficult to analyze all of the submissions. So, realistically, I only considered a few from a culled list. The criteria for culling the list were certainly flawed, so a good design could easily be overlooked. This method ensures that each submitted design gets some consideration from somebody.

If this is where we will go via the “VOTE” button on the ‘current round’ section of the lab page, then you need to add a button to that section that will show us an overview of all submissions.

I have a slew of questions on the New Lab Elo Comparison System:

Now that Lab 104 is over…

Are the Elo results for the lab 104 round complete?

a) If so, can we see them? …and when?

b) if not complete, when will they be? …and can we see them then?

c) Also, if you are not ready to show us the results, can you at least tell us if the Elo comparisons picked the same designs as the player votes for lab 104? …and…

d) If the results were different than the player vote, which designs did Elo select?

Why has the Elo comparison not gone on to comparing the “Bulge Star?”

Are initial results good - or at least encouraging? … or are there unexpected problems or difficulties? …or poor or unusable results?

If there is some other reason for withholding the results, may we be told what it is? Perhaps the devs are just swamped and overloaded with so much to do?

Please forgive my somewhat impatient excitement, but I am bursting with curiosity.

Would you be so kind as to bring us into the loop and let us know what is going on? I believe we are ALL extremely curious to find out.

Looks good so far. However, one weakness of this new system is that it still doesn’t resolve the issue where *both* designs being picked are bad-- for example, when one is a Christmas tree while the other consists of identical stacks with (alternating AUs + one GC pair at the end) and no loop stabilization.

So, would it be possible to introduce the “rating” idea more explicitly by allowing people to directly rate each design on a 0-to-100 scale? One advantage of this system is that points can be awarded based on the similarity between the user-predicted score and the actual synthesis score, such as (Points Awarded) = max [0, 500 - 10 * abs (predicted synthesis score - actual synthesis score)]. If needed, a bonus could also be given for predicting the synthesis score perfectly.
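To make the proposed payout concrete, here is the suggested formula as a one-liner (the formula itself is from the post above; the function name is just for illustration):

```python
def prediction_points(predicted, actual):
    """Proposed payout: 500 points minus 10 per point of prediction error, floored at 0."""
    return max(0, 500 - 10 * abs(predicted - actual))
```

So a perfect prediction pays 500, an error of 30 pays 200, and any error of 50 or more pays nothing.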

Some information on the structures (CG numbers, energy, and melting point, for example) would be good on the front page. Being able to toggle that with the larger development view easily would be good, too.
The comparison idea is good; I like it. It gives you something to compare your opinions to. Like infjamc says, it would help if there were an easy way to rate the designs, though, something simple like a 1-5 scale: 1 is unviable, 5 is stable and viable. Then the designs with the highest average, nearest to 5, would be selected, rather than just the most-liked. Points could then be given based on how accurate people’s assessments were.
If the scores were averaged, then it wouldn’t matter how long a design had been posted; if it’s good, its rating would show it.

There should also be a requirement on the minimum number of votes before a design could be selected, though. (Possible formulations might include “a design must be given at least half as many ratings as the design that has received the most ratings” or “a design must be in the top 50th percentile in terms of number of ratings given,” for example.) Otherwise, someone could submit a design at the last minute and give the maximum rating to their own design as a means to get the highest average possible, which would totally defeat the purpose of simply averaging the ratings.
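Combining the averaging idea with the first eligibility formulation above might look like this sketch; the data layout and function name are assumptions for illustration:

```python
def eligible_average(ratings_by_design, min_fraction=0.5):
    """Average each design's 1-5 ratings, keeping only designs that received
    at least `min_fraction` of the rating count of the most-rated design.

    min_fraction=0.5 matches the 'at least half as many ratings' rule;
    each design is assumed to have at least one rating.
    """
    most = max(len(r) for r in ratings_by_design.values())
    threshold = min_fraction * most
    return {design: sum(r) / len(r)
            for design, r in ratings_by_design.items()
            if len(r) >= threshold}
```

A last-minute self-rated design would then be filtered out for having too few ratings, rather than floating to the top on a single perfect score.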

My strategy is to skip comparisons when I can’t see a clear advantage of one design over another (i.e. both are really bad or both are top contenders). This should allow for maximum useful information to go into the rankings if everyone treats it that way.

This is a very slick implementation, kudos to the developers!
I find the side-by-side comparison extremely useful; it would be nice if we could compare pairs of entries this way regardless of what voting/ranking system is ultimately used. For reviews, they should still be randomly chosen, but for voting or just general reference, it would be nice to be able to compare two specific designs in the same window.