So I’ve been thinking about how to do transitions…
I can think of a few different ways to do transitions. One would be to use an “if COND goto …” to jump to somewhere else in the animation. This could, for instance, be used to transition between an ON and an OFF state in the same animation.
Another type of transition would be to cover the background with another layer, swap out the background and then make the cover transparent again. This could be used to implement fade-to-white/fade-to-black type transitions, or you could make a closing hatch, which then re-opens with a different background.
Now, the first type of transition is already possible; the second one really isn’t. One possibility would be to add a “sleep” statement to SCR files. That way you could make an SCR file sort of like this:
layer
file=fadetowhite.pqf
sleep 100 # After this we start over at layer 1
file=newbackground.pqf
This would start “fadetowhite.pqf” on the second layer, sleep 100 milliseconds, then start “newbackground.pqf” on the bottom layer.
An alternative to this would be to be able to state which file to play next, and also alter the timeout of an existing layer, sort of like:
timeout=100
nextfile=newbackground.pqf
layer
file=fadetowhite.pqf
In this case, the base layer would continue playing for 100 more millis, then transition to newbackground.pqf.
Right now I think I’m leaning towards the “sleep” method since it will allow arbitrarily long chains of layer updates, but it also depends a lot on how people actually want to use all this stuff…
Side-topic: I shared this thread with some Mandalorian Mercs who are working on gauntlets as well as voice controllers. Being able to integrate Proffie to reduce things down to a single controller where you have sound, interactive stuff, voice quotes, and control of your gauntlet readout or just a custom animation would be sweet. Maybe even do helmet fan control at the same time.
Something like a micom with a display (and fan control) then?
Adding fan control to a micom sounds fairly trivial, it’s really just a matter of picking a key combination for it.
I guess the only question is: What should the display do?
Maybe like Micom, but Proffieboard-based instead. As far as the display interaction goes, it could do some sort of Singing Birds, Rocket Pack, whatever “Ready Mode” interactive (based on buttons pressed?) thing. Just throwing the visual idea at you. As an example, imagine doing a Predator-type wrist piece. The prop potential goes beyond just an in-hilt display.
Yeah, that’s a good point; never mind having those rotor motors, you can animate it.
I’ve been thinking about how to do this, and now I’m tempted to write a ray tracer… 
This will come as a shock to nobody: I have too many things to do! 
I really want to create a virtual crystal chamber for color displays, but I really don’t have any experience with Blender and the like. Sure, I can write a ray tracer, but there has to be a simpler way, right?
So, help? Please?
Ideally, I would want to document the process and open-source scripts and models used so that other people can make modifications and create their own virtual crystal chambers as well.
Something that would show blade style things in real time (colors, transitions etc…) like an addressable pixel does?
No, at least not exactly.
That is a future thing I might do, but what I’m looking for at this time is something that shows a picture of a spinning crystal. When the saber turns on it will light up and spin faster, and there should also be animations for clash, lockup and other events.
The tricky part is to figure out how this should all work. Some effects might be in the base animation, while others would be in other layers.
Btw, the color of the crystal would be baked into the animation. And you would also need a different animation for each different screen size. This is why I really want to build some scripts that generate the animations, so that people could just load it up in Blender, make some changes, run the script and then get an animation that works for them.
No takers with Blender skills.
Guess I’m going to go learn how to use Blender now…
I tried that rabbit hole, once…
Only once…
Yeah, I’m starting to think that maybe writing a ray tracer from scratch might be easier after all…
Hi, I’m familiar with 3ds Max. I can model and animate a crystal and render pictures/video, but I’m not sure what kind of output you need?
That’s very cool.
In a perfect world, what I would want is a command or script that generates an image based on a few parameters, like:
- rotation of the crystal
- image resolution
- light level
Then I could write a script that can generate a virtual crystal chamber in any resolution.
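To make that more concrete, here is a minimal sketch of what such a generator script could look like using Blender’s Python API, run as “blender -b chamber.blend -P render_frame.py -- <rotation_deg> <width> <height> <light_level>”. The object names “Crystal” and “CrystalLight”, the file names and the argument order are just placeholders for illustration, not an actual implementation:

# render_frame.py - render one frame of the crystal at a given rotation, resolution and light level
import sys
import math
import bpy

# Blender passes everything after "--" through to the script untouched.
argv = sys.argv[sys.argv.index("--") + 1:]
rotation_deg = float(argv[0])
width, height = int(argv[1]), int(argv[2])
light_level = float(argv[3])

scene = bpy.context.scene
crystal = bpy.data.objects["Crystal"]       # placeholder object name
lamp = bpy.data.objects["CrystalLight"]     # placeholder light name

crystal.rotation_euler[2] = math.radians(rotation_deg)  # spin the crystal around its Z axis
lamp.data.energy = light_level                          # brighter when the saber is on

scene.render.resolution_x = width
scene.render.resolution_y = height
scene.render.filepath = "//frame_%03d.png" % int(rotation_deg)
bpy.ops.render.render(write_still=True)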
However, since we don’t live in a perfect world, what I would need is a set of PNG images, 128x80 pixels, which form an animation of a rotating crystal, as if seen through a small window on a saber. I’m not sure how many images would be needed for a full rotation, but maybe 50 or so? (Depends on how fast we want it to rotate, which I don’t actually know; at 25 frames per second, for example, 50 frames would be one full rotation every two seconds.)
I’m thinking that I would use another set of images when the saber is on, which would have more light and maybe some spark or lightning effects. In addition, we could use a semi-transparent overlay for some effects, like clash. The overlay can also be animated in case we want to do lockup that way.
Does this answer the question, or would you need more information?
I would recommend making the images in grayscale so they can be colored by code / style.
That’s an interesting idea.
Doing it at runtime would be pretty slow, however. The system I have designed so far is written to give a lot of options for how images are chosen and stacked, but no processing is done to the actual images.
Doing it offline could work, but it seems to me that it would be better to select the color before doing the rendering than trying to apply it afterwards. Generally nothing you see in a rendering is made up of a single color, and I think a lot of nuance and possibilities would be lost if it was rendered in grayscale. For example, if you set your crystal chamber material to “brass”, then it will alter the color reflected from the crystal in ways that would be lost if it was converted to grayscale.
Perhaps a better way to do it would be to render every image with a RED, BLUE and GREEN crystal. Then you could produce just about any other color by blending between these three images.
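To illustrate the blending idea, here is a sketch assuming the three renders are saved as PNGs and using Pillow/numpy; the file names are made up:

import numpy as np
from PIL import Image

def blend_rgb_renders(red_path, green_path, blue_path, color):
    # color is an (r, g, b) tuple with components in 0..1, e.g. (0.8, 0.0, 1.0) for purple.
    r = np.asarray(Image.open(red_path).convert("RGB"), dtype=np.float32)
    g = np.asarray(Image.open(green_path).convert("RGB"), dtype=np.float32)
    b = np.asarray(Image.open(blue_path).convert("RGB"), dtype=np.float32)
    out = r * color[0] + g * color[1] + b * color[2]
    return Image.fromarray(np.clip(out, 0, 255).astype(np.uint8))

# Approximate a purple crystal from the three base renders of one frame.
blend_rgb_renders("crystal_red_000.png", "crystal_green_000.png",
                  "crystal_blue_000.png", (0.8, 0.0, 1.0)).save("crystal_purple_000.png")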
That’s clear.
Not sure I can make an executable/script to generate images, but I can output a set of raw images that can be handled by the board app.
As BlackDragon says, it can be a grayscale set with color/brightness filters applied in post-processing with Photoshop (or similar), or RGB sets, as you wish.
For the rotation speed, it can be set by the number of images used (fewer images = faster).
Do you have some examples of what it should look like?
RGB sets would be preferable IMHO, but it’s fine to just make one color first.
No, I do not. AFAIK, this is the first virtual crystal chamber…
However, I can try to describe what I imagine…
First of all, I imagine having one of these displays on the side of a saber or chassis:
Then a crystal chamber, maybe something like this:
The crystal chamber would be fairly shiny metal so that we see some reflections from the crystal inside it.
The crystal inside glows and rotates. While looking to do this myself I found this:
The crystal at the end of this staff could potentially work if resized to fit inside the crystal chamber.
Note that all of this is just ideas; if you have something else in mind, I’m totally fine with that.
PS: It would probably be better to render at a higher resolution, like maybe 1080p.
Since I’ll be writing a script to blend and convert the files anyways, that script can also do the scaling and cropping, which would be great if we want to support other sizes or resolutions in the future.
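For reference, the scaling/cropping part of that script could be as simple as something like this; just a sketch using Pillow, with made-up file names and the 128x80 target from the discussion above:

from PIL import Image

def fit_to_display(src_path, dst_path, width=128, height=80):
    # Scale the high-resolution render so it covers the display, then center-crop the excess.
    img = Image.open(src_path).convert("RGB")
    scale = max(width / img.width, height / img.height)
    img = img.resize((round(img.width * scale), round(img.height * scale)), Image.LANCZOS)
    left = (img.width - width) // 2
    top = (img.height - height) // 2
    img.crop((left, top, left + width, top + height)).save(dst_path)

fit_to_display("crystal_1080p_000.png", "crystal_128x80_000.png")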