Eterna Performance/Know-how

Hi!

Recently I was forced back to my old PC (Phenom II X4 @ 3.6GHz, after restoring roughly 1/3 of its pins that were bent; 8GB RAM; 1050 Ti) due to warranty issues with my current rig, and even 15nt puzzles are extremely taxing - in a fullscreen window I get no more than 1-2 fps, and it eats up the CPU according to Task Manager…

This made me wonder how exactly Eterna works behind the scenes (where does the computing take place? is there a preferred browser?), what resources it really uses/likes when supported by a more “recent” configuration, whether it utilizes multiple cores or prefers single-core performance, etc.

Ty in advance!

That seems really bizarre! A puzzle that small should have no issues, and that hardware sounds relatively good. I’m going to assume you don’t have other things running (if you did, possibly worth verifying that none of those are causing issues). Beyond ye olde “turn it off and back on again and see if it helps”, the main thing that comes to mind is ensuring hardware acceleration is enabled in your browser - if not, this means that rendering is happening on your CPU instead of your GPU, which could bog things down (though I’d be surprised to see it be this bad).

Verifying this will depend on your browser. In Chrome, you can go to chrome://gpu and check what’s listed next to “WebGL” and “WebGL2” under “Graphics Feature Status” (both should say “Hardware accelerated”). In Firefox, go to about:support and look at the “Compositing” field under “Graphics” (I think it should just say “WebRender”, though I’m not 100% sure if this is the same on all systems).
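
If you’d like a quick programmatic check too, here’s a small sketch you can run from the browser devtools console. It only asks whether the browser will hand out WebGL contexts at all - a context can still be created with software rendering, so the diagnostics pages above remain the authoritative view:

```javascript
// Sketch: ask the browser whether it can create WebGL contexts.
// Note: this is only a rough proxy for hardware acceleration - the
// browser's own diagnostics pages (chrome://gpu, about:support) are
// authoritative.
function probeWebGL(canvas) {
  return {
    webgl: !!canvas.getContext("webgl"),
    webgl2: !!canvas.getContext("webgl2"),
  };
}

// In a browser console:
//   probeWebGL(document.createElement("canvas"));
```

If either comes back `false` on hardware that should support it, that’s a strong hint acceleration is disabled or blocklisted.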

To answer your later questions directly: there’s no preferred browser, other than that it should be relatively recent/up to date (no IE support :slight_smile:). Eterna will likely only take advantage of a single core (due to a combination of how browsers work, how the code is architected, and the type of operations we have to perform). As mentioned before, it will also take advantage of your GPU.

Running a small, basic puzzle in Chrome, the entire browser uses a couple hundred MB of RAM (most of which is just Chrome’s baseline) and <10% CPU (I have an i7-8750H). This of course changes significantly with the size of the puzzle - recent labs with a couple thousand bases are much more resource intensive - though we’ve recently done a fair bit of work to improve performance (it’s waaaay better than it was before this summer). If it’s of any interest, to give a bit more context, there are two main areas that impact performance:

  • Rendering, that is, displaying the graphics on the screen. Issues here would be noticed when just letting the puzzle idle, zooming, dragging, changing from target to natural mode, etc. There are two parts to this.
    The first is figuring out what goes where on the screen. Most notably, whenever the structure changes (e.g., switching between target and natural mode, or changing a base that alters the natural structure while in natural mode), the positions of all the bases have to be recomputed (and, when using our new annotations feature, the locations of annotations also need to be updated). This is the part we have the most control over, and where we’ve had the most issues in the past.
    The other aspect is actually drawing the objects on the screen. This is mostly handled by the library we use (PixiJS), which wraps an underlying technology called WebGL - it takes the images of the bases, instructions for drawing basic shapes like the base rings, etc., and turns them into the final image. This is the bit where the GPU is important.
  • Folding, that is, taking your sequence and predicting the natural mode structure. Issues here would be noticed particularly whenever you mutate a base. We rely on software written by other research groups to do this part.
    The folding engines/energy models are written in C/C++, which we then compile to WebAssembly - a compact, optimized instruction format that the browser can execute (similar to an application you might run directly on your desktop), and typically much faster than JavaScript.
    Here we’re very much limited by the efficiency of the algorithms. Conventional packages like Vienna, NUPACK, and Contrafold/Eternafold use dynamic programming algorithms whose runtime grows roughly with the cube of the RNA’s length - this is due to the nature of the problem (the number of possible structures grows exponentially, and even with clever bookkeeping the standard algorithms still have to consider every possible pairing between positions). LinearFold is special in this regard - its runtime grows only linearly with the length of the RNA. It does this by making some decent guesses (“beam pruning”) that allow it to throw out, and never check, most of the options.
    These algorithms also can’t (or at least, can’t easily) be parallelized, so even if we could take advantage of multiple cores, a GPU, or a ton of servers in the cloud, folding currently can’t be made to run much faster just by throwing more resources at it - it’s mostly limited by the speed of a single CPU core.
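
To make that scaling point concrete, here’s a stripped-down sketch of the kind of dynamic programming these engines are built on - Nussinov-style base-pair maximization, which just counts complementary pairs. Real engines like Vienna use far richer thermodynamic energy models, but they share the same triple-nested-loop structure:

```javascript
// Simplified Nussinov base-pair maximization: the classic cubic-time DP
// at the heart of conventional folding algorithms. This sketch only
// counts complementary pairs - real engines score loop energies etc.
const PAIRS = new Set(["AU", "UA", "GC", "CG", "GU", "UG"]);
const MIN_LOOP = 3; // minimum unpaired bases in a hairpin loop

function maxPairs(seq) {
  const n = seq.length;
  // dp[i][j] = max number of pairs formable within seq[i..j]
  const dp = Array.from({ length: n }, () => new Array(n).fill(0));
  for (let span = MIN_LOOP + 1; span < n; span++) {
    for (let i = 0; i + span < n; i++) {
      const j = i + span;
      let best = dp[i][j - 1]; // case: position j stays unpaired
      // case: j pairs with some k - try every candidate (this inner
      // loop is what makes the whole thing O(n^3))
      for (let k = i; k <= j - MIN_LOOP - 1; k++) {
        if (PAIRS.has(seq[k] + seq[j])) {
          const left = k > i ? dp[i][k - 1] : 0;
          best = Math.max(best, left + dp[k + 1][j - 1] + 1);
        }
      }
      dp[i][j] = best;
    }
  }
  return n > 0 ? dp[0][n - 1] : 0;
}

console.log(maxPairs("GGGAAACCC")); // → 3 (a classic GGG...CCC hairpin)
```

The three nested loops are where the cubic scaling comes from: doubling the sequence length means roughly eight times the work, which is part of why large lab designs are so much heavier than small puzzles. LinearFold’s beam pruning effectively caps how many candidates that inner loop ever considers.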

I’ve thrown a bunch of technical jabber in there largely for fun, but do let me know if you have any other questions or if I can clarify further. :slight_smile:

Thanks!

Never thought hardware acceleration could be off (and I assumed I had restored the pins to a fully working state) - the whole system is a fresh install, weird…

And wow, the answer(s) would please even the gods themselves, I guess :slight_smile:
The only QoL question I have: is it possible to render changes relative to our “focus” on the puzzle? Like, the base we mutated stays where it was on-screen, or our “focus” shifts to that base after rendering is done.

It’s certainly come up before, but we haven’t had the chance to look into it. If you don’t mind, do a quick search of the forum - if you see a relevant topic, like/comment on it as appropriate, otherwise make a new post in the feature suggestions category to make sure we have it recorded in our backlog!