Method of assigning points for player puzzles based on difficulty?

Some puzzles are much harder than others. For example, many of Brourd’s puzzles are much harder than some puzzles that “auto-solve” themselves. I wonder if there could be a way to let harder player puzzles give more points to the players who solve them, much like the Challenge puzzles. I understand that this could be exploited. This is just a thought I came up with. Feedback is welcome!

EDIT: I have no idea how this would work; any ideas?

The problem, I think, is that you don’t know a puzzle is difficult until it has been played. I wrote a script that ranks puzzle difficulty by the log of the number of solvers, so puzzle points would range from 100 to 300. The problem there is that the solver count is slow to access and a moving target; the program would have to run 24/7 to keep the ranks current. Looking at Jnicol’s excellent spreadsheet shows the ranks would not change very much, at least from the way I was looking at it.
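
For what it’s worth, a minimal sketch of the mapping I mean might look like this (the solver-count cap and all names are placeholders of my own; only the 100 to 300 range comes from the script described above):

```python
import math

MIN_POINTS, MAX_POINTS = 100, 300

def puzzle_points(num_solvers, max_solvers=10_000):
    """Map solver count to points: fewer solvers -> harder -> more points.

    max_solvers is an assumed cap used only to normalize the scale.
    """
    if num_solvers < 1:
        return MAX_POINTS  # no solvers yet: treat as maximally hard
    # Normalize log(#solvers) to [0, 1], then invert so rarely solved
    # puzzles land near 300 and widely solved ones near 100.
    scale = math.log(num_solvers) / math.log(max_solvers)
    scale = min(max(scale, 0.0), 1.0)
    return round(MAX_POINTS - scale * (MAX_POINTS - MIN_POINTS))
```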

Why not use the prediction bots and base it on ‘runtime per puzzle’ divided by nucleotides?
I don’t mean that raw measurement on its own, as that wouldn’t be a fair way to determine difficulty; other factors could be used in the equation, e.g. free energy. But people with more in-depth knowledge of the programs would be able to help with the specific algorithm to apply.
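
Something like this rough sketch, where the weights, the names, and the way free energy is folded in are all guesses on my part rather than anything EteRNA actually does:

```python
def raw_difficulty(bot_runtime_seconds, num_nucleotides, free_energy_kcal,
                   runtime_weight=1.0, energy_weight=0.1):
    """Combine bot runtime per nucleotide with the target's free energy.

    Both weights are arbitrary placeholders; a fair formula would need
    input from people who know the folding programs.
    """
    runtime_term = runtime_weight * (bot_runtime_seconds / num_nucleotides)
    # Guessing that a lower (more negative) target free energy means a
    # harder design; the sign convention here is an assumption.
    energy_term = energy_weight * abs(free_energy_kcal)
    return runtime_term + energy_term

# e.g. a bot taking 45 s on an 80-nucleotide puzzle with a -25.3 kcal/mol target
print(raw_difficulty(45.0, 80, -25.3))
```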

The downside to that method is that it would delay puzzles from becoming available until the bot had assessed them and assigned each puzzle’s value, which might annoy some people. A solution would be adding an option to the puzzlemaker letting you choose between publishing the puzzle immediately with the default 100-point value, or sending it to the bot to assess and, possibly, be given a higher value.
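
A rough sketch of that publish option, with every name and the placeholder bot scorer invented purely for illustration:

```python
DEFAULT_POINTS = 100

def bot_assess_difficulty(puzzle):
    """Hypothetical stand-in for the bot's scoring run."""
    return DEFAULT_POINTS  # a real assessor would return something in 100-300

def publish_puzzle(puzzle, assess_with_bot=False):
    """Publish immediately at the default value, or wait for the bot."""
    if assess_with_bot:
        # The puzzle stays unpublished until the bot finishes: this is
        # the delay mentioned above.
        points = bot_assess_difficulty(puzzle)
    else:
        points = DEFAULT_POINTS  # available to players right away
    return puzzle, points
```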

The problem with this idea is that there aren’t many bots that play by all the rules (constraints), and they wouldn’t be able to determine, even roughly, the difficulty for the humans who are going to face that puzzle.

Well, that just goes to show my lack of knowledge about the EteRNA internals. :stuck_out_tongue:
I haven’t delved into Vienna and the like yet (it’s on the to-do list after I finish the browser extensions), so my thoughts are basically conjecture held together with assumptions. I thought I had seen an EteRNA bot running on a puzzle, but now that I check, it seems I had just seen the individual program bots, e.g. Vienna, Designer.

And I agree that the raw runtime from a bot would be insufficient to determine difficulty; what I was trying to say was that it would provide the raw number, which would then be factored by a human-made ruleset of difficulty based on the puzzle’s structure and constraints.

Now that I’ve typed all this, I’ve realised what the system I’m thinking about resembles, and it’s useful as an analogy: a SpamAssassin-type scoring ruleset for the puzzles. But as you’ve pointed out, if there isn’t already a bot that can handle the puzzle’s various settings, then this isn’t really feasible and I’m just typing lots of text purely as a mental exercise. (╯°□°)╯︵ ┻━┻
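
To make the mental exercise concrete anyway, a SpamAssassin-style ruleset might look something like this, with every rule, threshold, and weight invented for the example:

```python
RULES = [
    # (description, predicate on a puzzle dict, score contribution)
    ("long sequence",      lambda p: p["length"] > 100,          20),
    ("has multiloop",      lambda p: p.get("multiloops", 0) > 0, 30),
    ("GC-pair constraint", lambda p: "gc_limit" in p,            25),
    ("slow bot runtime",   lambda p: p["bot_runtime"] > 60,      40),
]

def difficulty_score(puzzle):
    """Sum the scores of every rule the puzzle triggers."""
    return sum(score for _, test, score in RULES if test(puzzle))

# A long puzzle with a multiloop that the bot solved slowly:
example = {"length": 120, "multiloops": 2, "bot_runtime": 90}
print(difficulty_score(example))  # 20 + 30 + 40 = 90
```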

Yeah, I have no idea how this’d work. It’s just an idea I had.

I think this would be nice.