My latest crazy idea is to put a bytecode interpreter in ProffieOS.
This would make it easy to load code from SD or serial connections. I’m not sure if it would be fast enough to run things like styles, but it would certainly be possible to implement props, scripts and other weird hacks this way. It would also mean that some optional features could be moved out of ProffieOS and into blobs that could live on the SD card instead, potentially saving memory.
One such bytecode interpreter is Q3VM, which has been around a LONG time and was originally made for Quake 3. It already has a compiler, and the input is C code, so many things would work similarly to how they do in ProffieOS.
If the API is carefully designed, the compiled scripts would be fairly portable and long-lived, so they could be exchanged between users in much the same way that fonts are.
While Q3VM looks awfully tempting, I think I should investigate loading native ARM code first. While it might be more difficult to implement, the speed benefits are significant, and it would also allow for C++, which means the code would be exactly like what you would write in ProffieOS, and it would definitely work for styles (as long as the code fits in RAM).
Something akin to shaders for sound and visual styles? The visual styles are already procedural, it’s “merely” (ahem) a matter of mapping the configuration to code. Sound is sampled rather than synthesized, but shaders allow samplers as well (texture images in graphics), and procedural sound is definitely a thing in synth communities. SuperCollider might be a good source of inspiration here.
My custom always-in-beta saber is run by an Arduino Nano 33 IoT and uses Perlin noise (my own implementation) for sound and flicker alike. It doesn’t sound nearly as good as sampled sounds (yet), but it gets by with very little memory and without an SD card.
The problem with synthesized sound is that it is hard to customize.
When I started making saber stuff, what I really wanted was for the sound to be more dynamic and responsive, so I started with synthesized sounds. While my synthesizer could manage a decent lightsaber hum, it was absolutely no match for the smoothswing algorithm that thexter came up with, and with sampled sounds, custom fonts let you have a LOT of different sounds. (Although we still have some work to do to support alternate swing sound models.)
Synthesized sound is hard to do well, agreed, but there’s a lot of people out there who enjoy fiddling with software synthesizers, including the code-oriented SuperCollider.
I wasn’t suggesting making anything like that the only sound mode, or even the main mode, but it could be a fun option. It would get rid of a lot of problems with mixing, looping and time-stretching of samples, though it would certainly replace them with different problems.
My procedural approach is a side effect of my job. It’s not the convenient choice, I know, I just find it more enjoyable.
I do have the skills for writing sound and visual “shaders”, but zero clue on how to integrate a VM into the code base…
I’m more of a myopic “inner loop coder”, not a software architect with the big picture perspective required here.