I posted about this in the “New synthesis candidates selecting system” thread, but here’s the idea in more detail:

Instead of either having a limited number of votes or having to compare designs pairwise, each player predicts the synthesis score of each design they wish to evaluate. They then earn points for each evaluated design that is actually synthesized, with more points the smaller the difference between the predicted and actual synthesis score.

A simple reward formula might be something like max(125(1 - |*a* - *p*|/*t*), 0), where *a* is the actual score, *p* is the prediction, and *t* is the error |*a* - *p*| at or above which no points are given. This yields a maximum of 1000 points if all eight synthesized designs’ scores are predicted exactly (the same maximum as for the current voting reward) and 0 points if every such prediction is off by *t* or more. A reasonable value for *t* might be 25.
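To make the formula concrete, here’s a minimal sketch of that reward in Python. The default values (*t* = 25, 125 points per design) are just the example numbers above:

```python
def prediction_reward(actual, predicted, t=25, scale=125):
    """Points for one synthesized design: full scale for an exact
    prediction, decreasing linearly to 0 at an error of t or more."""
    error = abs(actual - predicted)
    return max(scale * (1 - error / t), 0)
```

So an exact prediction earns 125 points, a prediction off by 10 earns 75, and anything off by 25 or more earns nothing; eight exact predictions hit the 1000-point maximum.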

There are at least two options for how to use players’ predictions to pick the top eight designs:

Option 1:

Each design gets a “combined prediction” based on all player predictions for it. This could be just a simple average, or it could be a weighted average based on how accurate each player’s predictions have been in the past. The latter would prevent poorer predictors (or strategic predictors, e.g. those who predict “100” for their own design and “0” for everyone else’s) from watering down the predictions of better predictors. This accuracy rating could also be treated as an additional game “score” for players to seek to improve, in addition to their regular points total.

Of all designs that have received a certain minimum number of predictions, the eight with the highest combined prediction are selected for synthesis.

Option 2:

The predictions are used to perform pairwise comparisons *automatically* for each player, and the eight designs with the highest resulting Elo scores are selected for synthesis.
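One way Option 2 could work, as a sketch: for every pair of designs a player has predicted, treat the higher-predicted design as winning that matchup, and run a standard Elo update over all such matchups. The K-factor of 32 and base rating of 1000 are conventional Elo defaults, not anything specified above:

```python
from itertools import combinations

def elo_from_predictions(predictions_by_player, k_factor=32, base=1000):
    """predictions_by_player: {player: {design: predicted_score}}.
    Each player's predictions generate one automatic pairwise
    comparison per pair of designs they rated; the design they
    predicted higher wins the matchup (equal predictions draw)."""
    ratings = {}
    for preds in predictions_by_player.values():
        for a, b in combinations(preds, 2):
            ra = ratings.setdefault(a, base)
            rb = ratings.setdefault(b, base)
            # Standard Elo expected score for design a.
            expected_a = 1 / (1 + 10 ** ((rb - ra) / 400))
            if preds[a] > preds[b]:
                score_a = 1.0
            elif preds[a] < preds[b]:
                score_a = 0.0
            else:
                score_a = 0.5
            ratings[a] = ra + k_factor * (score_a - expected_a)
            ratings[b] = rb + k_factor * ((1 - score_a) - (1 - expected_a))
    return ratings
```

The eight designs with the highest ratings would then be selected, e.g. with `sorted(ratings, key=ratings.get, reverse=True)[:8]`.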