Crazy idea of the day: PQVF gotos

Ok, so I’m working on color display support. A good chunk of the code is written, and I’m creating a new file format which I’m calling PQVF (ProffieOS Quite Ok Video File). It’s based on QOIF, and allows the videos to be slightly compressed with no loss in quality.

A PQVF file is just a bunch of compressed images after each other, with a small header in the beginning to specify the frame rate.
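
Just to illustrate what I mean by a “small header”, it could be something like this. This is only a sketch; the field names and sizes are placeholders, not the final format:

#include <stdint.h>

// Hypothetical sketch of a PQVF header; the real format may differ.
struct PQVFHeader {
  char magic[4];        // e.g. "PQVF"
  uint16_t width;       // frame width in pixels
  uint16_t height;      // frame height in pixels
  uint16_t frame_rate;  // frames per second
};
// The compressed frames just follow the header, one after the other.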

However, once I started thinking about looping, the fun began…

I thought: why don’t I just have a tag in the file that acts like a “goto”, which just jumps to another point in the file? If you want a looping file, you just put a goto at the end of the file to go back to the beginning again.

My next idea was to make these gotos conditional. You could have a 4-byte word which specifies what condition is used. That way you could do something like a loop which ends when lockup ends, or a loop which ends when the saber is off. Kind of like IF ISON GOTO nnnn where “nnnn” is a position in the file.
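
To make it concrete, the tag could be as simple as a condition code plus a target offset. This is just a sketch of the idea; the condition names and the EvaluateCondition() helper are made up:

#include <stdint.h>

// Sketch of a conditional goto tag: a fourcc condition plus a file offset.
struct PQVGotoTag {
  uint32_t condition;   // e.g. fourcc "ISON"; 0 could mean "always"
  uint32_t target;      // byte offset in the file to jump to
};

// Hypothetical player logic when a goto tag is encountered:
// if (tag.condition == 0 || EvaluateCondition(tag.condition)) Seek(tag.target);
// otherwise just keep reading the next frame.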

Ok, so that’s cool, now we can make choices and loops based on conditions, that opens up some interesting possibilities. But what about something like a battery meter, a bullet counter, or a VU meter?

Well, there are multiple ways to implement this, and I’m not actually sure which is better yet. One would be to replace the IF statement with a more switch-like statement, kind of like:

SWITCH BATT GOTO AAAA, BBBB, CCCC, DDDD

This would goto AAAA if batt is less than 25%, BBBB if it’s 25-50, CCCC if it’s 50-75, and DDDD otherwise.

Another way that might be more self-explanatory would be to keep using IF statements, but include a range, like:

IF BATT IN 0, 25 GOTO AAAA
IF BATT IN 25,50 GOTO BBBB
IF BATT IN 50,75 GOTO CCCC
IF BATT IN 75,100 GOTO DDDD

I think maybe this last way of doing it is easier to understand, but it’s possible that it won’t be quite as fast if there are a lot of if statements involved.

Anyways, this is just me thinking out loud a bit, hopefully we don’t end up with a full BASIC interpreter in ProffieOS… :slight_smile:


I love the idea, but it seems to me that you are about to put a style into the video format. I tend to think that separating the logic from the assets would be more logical. After all, you will need to make many of those decisions based on user inputs.
Making styles in a sort of XML-like file that is later compiled and uploaded would be a nice initial step for usability. Maybe not full styles, but something like the management of the assets.

It’s not entirely clear to me what counts as a “style” and what counts as an “asset” in this case.

For one thing, the original asset is almost certainly not a PQV file anyways. It’s probably a higher resolution and higher quality set of videos or images that you don’t actually keep on your SD card. My assumption is that you will have a script (or maybe an XML file) which specifies how to generate the PQV file from assets, which ultimately sounds very similar to what you are suggesting.

Color displays come in a large variety of sizes, so my expectation is that sharing PQV files won’t be all that helpful, instead it will be better to share a zip file which contains a script that you run, and it asks what resolution your screen is and then generates the PQV file.

Well, you know I have a bunch of color LCDs just to test all this stuff when you have something for testing. So I think we thought along similar lines. What I thought of as a “style” is probably best described as a prop.
I’m thinking about something like programming a tricorder or a communicator or something like that. So, I was thinking that everything that is “interaction” (reacting to inputs, conditions and such) is more properly a “prop”, while what is displayed or transmitted is a “style”. In other words, you probably want a keystroke for checking the atmosphere, and then, with a certain probability, detect a breathable atmosphere, not enough atmosphere, or some sort of toxin. Just making up an example. All that seems like prop mechanics to me, while the actual videos are more like assets, just like poweron.wav or clash01.bmp are. In fact, I would assume you could put each resolution in its own directory, very much like how the altXX directories work.

I think there needs to be multiple levels of abstraction though, because there are some things that the prop shouldn’t have to know, things like: is the file looped? How many frames does it have? What’s the frame rate? Etc.

I’m currently thinking of it in terms of three levels of abstraction:

  1. the prop, which translates buttons into actions, like “show the battery meter” or “activate mode one” (which could be the atmosphere survey thing.)
  2. the controller. For blades, this is the style, for sounds this is hybrid_font.h, and for displays this used to be hard-coded, but I’m actually making it so that it can be swapped out for something else. The controller is essentially responsible for selecting the right file to play based on the current state and events, and possibly other things, like rendering bullet counts.
  3. The file. Ideally I want the file to be somewhat standalone, and not require a bunch of configurations and changes in the code to work. The file shouldn’t have a lot of advanced configuration in it, but it would be kind of cool to be able to have files that can do random loops, VU meters or battery meters.

I like the idea of having sub-directories for different resolutions though. I’ll have to see if I can implement that.

I have been struggling with lockup and blaster.h, since I want to count shots, get the number of shots fired between begin and end, get the option to interrupt the lockup, and such. If some of that functionality was embedded into the file, boy, it would be crazy to redo things. Of course, things regarding the display (like desired FPS, whether it’s able to loop, etc.) are things that would be nice to abstract. But I’d rather see something like:

if (somevideo.canloop) {
  while (scanning_mode) {
     hybrid_video[&maindisplay].Loop(&somevideo);
  } 
}

Or something like that.

I mean obviously I don’t want to make anything harder, but I don’t understand what it is you’re trying to do, so it’s hard for me to say how it would fit in.

Lockup and bullet counts are all controlled by the prop, so the prop can do whatever it wants with those. Getting that information to the output device (the display) in the right way may be sort of interesting though. In our current setup, that has to be done with a custom display controller, which is pretty painful.

I’ve been thinking about this quite a lot over the last few days, and I think these gotos are still a good thing, but there are some questions about how to control what they do and what they have access to that I’m still trying to figure out.

Let me start with some of the problems I’m trying to solve:

Looping animations

Currently, looping animations are made by making a tall image, which is kind of silly. Why should a looping animation have a different format than a non-looping one? Why should you use different tools to make them?

Having a goto at the end of the file seems like a better way to make a looping animation.

How long to show an image or animation

So, currently we have different rules for looping and non-looping animations. Non-looping animations are played until the end, unless interrupted by something else, while looping animations have a time limit, and if that time limit is -1, we use the length of the corresponding sound effect. The time limit is stored in a separate config file, but cannot be specified on a per-file basis.

For now, I have not figured out a better way to do this; still pondering.

Customizing battery meters

Today, the battery meter is rendered by code. It’s made up of a repeating series of glyphs. The glyph itself is easy to switch out, but you can’t do anything cool with it. It would be much better if battery meters were a set of images, or an animation, and we would then choose the right image (or animation) to show based on the current battery level.

Some version of these if statements can make that possible. It’s also possible to do some animations and transitions and stuff.

Bullet Counters

So, we have a way to do bullet counts using custom controllers. It is, however, not particularly user friendly. In addition, it’s simple glyph rendering, so if you wanted to do something like a bar graph you would need to re-code it.

My thinking is that bullet counters should ALSO be animations, just like the battery meter. That way they are fully customizable. However, it may get a little silly if the bullet counts are high.

Now, I was thinking that if you have a phaser, you might want to take a battery meter animation and use it as your “bullet counter”. I can think of at least two ways to accomplish this:

  1. I can write a tool that “disassembles” a PQV file into a script and a bunch of images. Then you can modify the script and re-assemble it to do what you want.
  2. I’m thinking that the animation code could call prop.getVariable(variableName) to evaluate the conditions. This way the prop could have total control over the values returned. So the prop would just have to do something like:
int getVariable(uint32_t variableName) {
  if (variableName == PO_fourcc("BATT")) {
    // Report the bullet count in place of the battery level.
    return bulletCount * 100 / MAX_BULLETS;
  }
  return PropBase::getVariable(variableName);
}

Of course, this would make it impossible to have the display actually show the battery level, so it might not be that great of a solution…

Transitions

This one is tricky. Today, when an event occurs, we generally immediately switch to the animation for that event. The fact that the resolution is very low helps hide the weirdness this sometimes causes in animations. It would be nice if we could somehow have better transitions between effects on the display.

Unfortunately, I don’t actually have a solution for this yet. One potential option would be to use the gotos and let them handle the transition from one loop to another inside the file itself. However, this quickly gets complicated. It also makes it hard to customize things by just moving files around, and we may need to know whether the file is handling the transition or whether the controller is supposed to switch to a new file.

Animations AND Bullet Counts / Battery Meters

In some cases, it would be super cool to have a counter, or a bar, then have an animation running in the background. While it’s theoretically possible to do that with just gotos, we would need N x M frames to do it, where N is the number of frames in the animation, and M is the number of frames in the bar or bullet counter. While that might work in some cases, it will quickly become impossible if N or M is not a fairly small number.
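
For example, a 24-frame background loop combined with a 30-state bullet counter would already need 24 x 30 = 720 pre-rendered frames.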

Now, the only solution I can come up with for doing this is to have layers for our animations, just like we have for styles. The base layer would be the animation, and the second layer would be the counter, or the bar. Sounds simple, but implementing it is another matter. The biggest problem is that alpha blending is a semi-expensive operation, and the CPU we have is not very fast… But then there is also the weirdness of having two different layers to control. When an event happens, do we swap out the top animation, the bottom animation or both? Who or what decides? Not sure…

I think for now I’m going to leave this as a future thing, even if it would be cool.

Full-on GUI systems

Up until now I’ve been thinking that menu systems would have to be coded into the prop, and that a special controller would have to be made to support it. However, after thinking more about it, it occurred to me that these conditions could easily be made into a menu system. All we would need is a way to make conditions for button presses, AND a way to report actions back to the prop.

The prop would need to translate button presses into events, this could either look like:

  SaberBase::DoEvent(GUI_BUTTON_PRESS, 1);

or maybe it would be done through getVariable, something like:

int getVariable(uint32_t variableName) {
  if (variableName == last_button_) {
      // Report the last button press once, then clear it.
      last_button_ = 0;
      return 1;
  }
  return PropBase::getVariable(variableName);
}

Now, in addition to the GOTOs, there would have to be something similar to TrDoEffect, or maybe something that runs a command so that the menu system can tell the prop to DO something when a menu entry is selected. A menu script could end up looking something like this:

m1s1:
import "menu1_selection1.png"
if SELE goto select_m1s1
if DOWN goto m1s2
goto m1s1
m1s2:
import "menu1_selection2.[ng"
if SELE goto select_m1s2
if UP goto m1s1
goto m1s2
select_m1s1:
run "toggle setting1"
import "menu1_selection1_selected.mp4"
goto m1s1
select_m1s2:
run "toggle setting2"
import "menu1_selection2_selected.mp4"
goto m1s2

This would then get compiled into one PQV file.
The menus could also have little animations in them, but the scripts for that would be complicated.

Again, having multiple layers would make menu systems like this very very cool, just like it would be for bullet counts and the like. But even without that, it would allow for some fairly nice and customizable stuff.

Anyways, I’m sure there is a ton of stuff I’m forgetting, but it’s getting late and my brain wants to stop thinking now.


I’m thinking that a natural way of doing the display part, would be to have individual layers. And be able to loop or control the image displayed on each layer. If we can have an image format with multiple images that are indexed for easy jumping, we could just position (may be rotate) and select the image index in each layer.
So, you could have a background layer, then a battery layer, then an energy level layer, then a bullet counter layer, a heat layer, etc. For each layer you open a file and move the displayed image within the file according to a rule defined in the prop.
If you have multiple displays, you just instate a new base display. Am I getting the idea across?

As I wrote above, layers are expensive. The alpha blending itself takes at least 15 cycles per pixel, plus the cost of actually loading the layers. At 128x160 @ 30Hz, alpha-blending two layers takes up 10% of the available cycles on a Proffieboard. Supporting a whole bunch of layers would be impossible.
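
To put rough numbers on that: 128 x 160 pixels x 30 frames per second x 15 cycles per pixel is about 9.2 million cycles per second, which works out to roughly 10% of a CPU clocked around 80-100 MHz. (Round numbers, just to show the scale.)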

Building a system that depends on alpha blending also means that larger displays would have to run at low frame rates, which wouldn’t look very good. On a Teensy 4 (running at 600MHz) layers would work great.

This makes it impossible to customize for most people

Yes, but your good ideas are not feasible, and tying everything to the prop seems like a bad idea.

Ok, I think I’m seeing the issues. Now, I just need one piece of information: can you use multiple files if, instead of doing alpha blending, you just define sections? Like using a background image, but then defining rectangular sections that are whole files?
Because if we wanted to have more than one sensor (ammo, heat, energy, whatever) on the same display, I don’t see many other options.
If that was reasonably fast, then we could define a background image, use images that state whether they are a simple animation or a fractional animation (i.e. the displayed frame increases or decreases proportionally, using the number of frames they contain), and have a tool that does the alpha-blending once, so the animation is rendered to a new file over the background image.
Then, you could have some form of XML (or other config) which defines the background and all the “variable” sections, and then moves them according to certain rules exported in a standard way.
Say you define AMMO_COUNT, AMMO_LEFT, POWER_LEVEL, HEAT_LEVEL, BATTERY_LEVEL, MODE_TEXT, etc. So you can say something like:


BACKGROUND = file1.bmp
AMMO_COUNT = file2.pqv
AMMO_AREA = (87, 45) // the x,y coordinates where file2.pqv starts rendering

Another question about performance: you mentioned alpha blending, but what about transparency? Say in a 256-color palette, 255 is transparent. So you either take the one value or the other. You don’t have to mix, just compare. I know that by taking “areas” from different files it should be faster. But still, maybe it’s “acceptable”.

Both kinds of binary transparency are probably fast enough to work. We could, for instance, dedicate the lowest green bit as the transparency bit.
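
As a sketch of what that could look like with RGB565 pixels (just illustrating the idea, nothing is decided):

#include <stdint.h>

// Treat the lowest green bit (bit 5) of an RGB565 pixel as "transparent".
static inline bool IsTransparent(uint16_t rgb565) {
  return (rgb565 & 0x0020) != 0;
}

// Binary transparency: pick either the overlay or the background, no mixing.
static inline uint16_t Composite(uint16_t background, uint16_t overlay) {
  return IsTransparent(overlay) ? background : overlay;
}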

Using rectangles is probably a bit more efficient, because it means we don’t have to do anything for the pixels outside of the rectangle.

Using a single transparency bit per pixel is more flexible, because it means you can do whatever shape you want to, but it’s never going to look great without alpha blending.

One thing that has occurred to me is that it may be possible to optimize alpha blending. Most of the pixels are going to be fully transparent or fully opaque after all, and both of those cases can be optimized. A run of transparent pixels just means doing nothing, and a run of opaque pixels means that we can just copy them directly. Only the edge pixels would need the actual alpha blending formula.
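
Something like this is what I have in mind; a rough sketch assuming an 8-bit-per-channel buffer for clarity (the real code would work on whatever pixel format the display ends up using):

#include <stdint.h>
#include <string.h>

// Blend one row of an overlay onto the destination, skipping over runs of
// fully transparent or fully opaque pixels. 'alpha' is the overlay's
// per-pixel alpha: 0 = transparent, 255 = opaque.
void BlendRow(uint8_t* dst, const uint8_t* src, const uint8_t* alpha, int pixels) {
  int i = 0;
  while (i < pixels) {
    if (alpha[i] == 0) {                    // transparent run: do nothing
      while (i < pixels && alpha[i] == 0) i++;
    } else if (alpha[i] == 255) {           // opaque run: straight copy
      int start = i;
      while (i < pixels && alpha[i] == 255) i++;
      memcpy(dst + start * 3, src + start * 3, (size_t)(i - start) * 3);
    } else {                                // edge pixel: full alpha blend
      for (int c = 0; c < 3; c++) {
        dst[i * 3 + c] = (uint8_t)((src[i * 3 + c] * alpha[i] +
                                    dst[i * 3 + c] * (255 - alpha[i])) / 255);
      }
      i++;
    }
  }
}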

Regardless of how it’s done, these layers will add a lot of complexity to the code, so I might still not do it right away…

1 Like

I was thinking of a “fast” way of doing this, which would not be super nice, but should be pretty fast and flexible, and would allow for a low-complexity initial concept. The idea would be to use a full color background with many monochrome layers. If you just assign a color value to the monochrome animations and have addressable frames, you could have dials, levels and counters which would look good enough.
The effort would be left to the graphic design of the combination. Even better, if the color of the monochrome image were left to the prop or image description, you could change the energy dial color (e.g. green when full, yellow when halved, red when just a quarter left, or go from white to red when the round counter is low). So you would not have to program more than an iteration of XORs, and the installer would be able to make a multi-variable screen with color cues. I think it would get 90% of the utility with 20% of the OS programming complexity.

Unfortunately I think your percentages are a little off.
I think XOR with color backgrounds might solve most problems, but it would do it in a way that doesn’t look very good, so I have to subtract some points for that; maybe 75% instead of 90%?

Also, programming-wise it’s almost the same, so it will take 90% of the work.

My current thinking is that I will implement something that can do full alpha blending. I hope I can optimize it well enough to work for your purposes. However, I want to make it so that it’s still possible to use it with a single layer to support the largest possible screens.

Speaking of “screens”, we still have the problem of defining which layer does what, when and how… At first I was thinking that there would be a separate controller for each layer, but I’m not sure that makes sense since you would almost always want to change all the layers when something happens. It’s more like you would want to define a “screen”, which specifies what all the layers are, and then we would use events to switch between screens.

Although I suppose it could be kind of cool to keep the background animation going at all times and just switch the foreground animation when something happens.

Ok, so it’s been a busy summer, I went to Japan, my brother came to visit and I went to Victoria BC. The whole time I’ve been puttering and thinking about how I want the color display support to work.

Here is what I’m thinking right now.

  • When you declare the display, you also have to declare how many layers you want to support. Extra layers generally won’t hurt much, but will take up some memory.
  • The base layer should be opaque, and the layers on top of it should have alpha values. Fully transparent and fully opaque pixels will be fast; semi-transparent pixels will be much slower and should be used sparingly. The idea here is that in most cases, the semi-transparent pixels will only be used in the outline of the layer, so that will be a fairly small number of pixels.
  • Each layer plays from its own file. Each file can be a static image, an animation, or something that selects which image to show based on a value, like a battery meter, or a volume meter.
  • A stack of layers together will be called a “screen”, and will be defined by a *.SCR file. Each SCR file is essentially a config file which specifies what each layer should do, it could look something like this:
layer
file=background.pqf
layer
file=batterymeter.pqf
A=battery

The A=battery would tell the layer that variable A should be taken from the battery. If you used A=volume instead, you would have a VU meter. I’m thinking that I would support up to five variables (ABCDE), but obviously the vast majority of files would only use one variable.
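
The binding itself could be a small lookup in the controller. Here is a sketch; GetBatteryPercent() and GetVolumePercent() are just placeholders for whatever the real data sources end up being:

#include <string.h>

// Placeholders for the real data sources (0..100):
static int GetBatteryPercent() { return 72; }
static int GetVolumePercent() { return 40; }

// Resolve an "A=battery" style binding from the SCR file to a value.
static int ResolveLayerVariable(const char* source) {
  if (strcmp(source, "battery") == 0) return GetBatteryPercent();
  if (strcmp(source, "volume") == 0) return GetVolumePercent();
  return 0;  // unknown source
}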

The SCR files would be tied to an EFFECT, similar to how BMP files work for OLED displays. Also, each layer in the SCR file will have some variables that specify how long to play something, whether a file should be restarted if it’s already playing the right file, and other details. This should take the place of the variables that we have in the font config file right now.

The batterymeter.pqf file would use the GOTO idea to select which image to show based on the A variable. The batterymeter.pqf file would be generated with a tool, and that tool would have an input file that could look like:

20:
image "pct20.png"
if 'A<20' goto 20
40:
image "pct40.png"
if 'A<20' goto 20
if 'A<40' goto 40
60:
image "pct60.png"
if 'A<40' goto 40
if 'A<60' goto 60
80:
image "pct80.png"
if 'A<60' goto 60
if 'A<80' goto 80
100:
image "pct100.png"
if 'A<80' goto 80
goto 100

A 100-scale battery meter would obviously have a much longer config file, but a short script could generate it…
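
Something along these lines could do it; purely illustrative, reusing the pctNN.png naming and script syntax from the example above:

#include <cstdio>

// Generates an N-step battery meter script in the format shown above.
int main() {
  const int steps = 100;           // one image per percent
  const int step = 100 / steps;
  for (int i = 1; i <= steps; i++) {
    int level = i * step;
    printf("%d:\n", level);
    printf("image \"pct%d.png\"\n", level);
    // Drop down to the next lower bucket if A has fallen below it:
    if (i > 1) printf("if 'A<%d' goto %d\n", level - step, level - step);
    // Stay in this bucket while A is below the next threshold; the top
    // bucket just loops on itself:
    if (i < steps) printf("if 'A<%d' goto %d\n", level, level);
    else printf("goto %d\n", level);
  }
  return 0;
}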

I’m also thinking of including a way to have PQF files respond to button presses and generate events back to the prop, that way it becomes possible to implement actual menu systems directly in PQF files.
