Use WavLen with user effects?

Playing with blade styles for the new HEV prop.
Not having any luck just trying to set a transition's timing in an EFFECT_USERn to WavLen.
It's a prop-specified sound, but that shouldn't matter, right?
I also tried moving the switch case EFFECT_USER1: to SB_Effect2, but it's still not working.
Is it an issue that the call is SOUNDQ->Play(&SFX_health); rather than plain PlayCommon()?


Yes, playing things with SOUNDQ means that WavLen won't work. (Since the sound might not play until later.)

Makes perfect sense, thanks.

If you use EFFECT_TRANSITION_SOUND you can do WavLen. Transition sounds can be anything you want; they're intended to be generic sounds for multiple uses.
Just drop TrDoEffect<…,EFFECT_TRANSITION_SOUND,N> at the beginning of the transition, and then you can specify the exact sound N to play.
Then use WavLen<EFFECT_TRANSITION_SOUND> for timing after the sound is triggered.
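If I'm reading the suggestion right, the style side might look something like the fragment below (a blade-style fragment for a config file, not a complete preset; the sound number 3, the white flash, and the fade are arbitrary examples picked for illustration):

```cpp
// Trigger transition sound #3 at the start of the transition, then time the
// rest of the transition to that sound's length via WavLen.
TrConcat<
  TrDoEffect<TrInstant, EFFECT_TRANSITION_SOUND, 3>,  // fire the sound first
  White,                                              // flash while it plays
  TrFade<WavLen<EFFECT_TRANSITION_SOUND>>>            // fade timed to the wav
```

Because the sound is triggered by the first transition step, WavLen already knows its length by the time the fade needs it.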

Hmm thanks. Something like that might work if the prop wasn’t selecting specific files based on values, in addition to queuing up sounds that might not play until after the style code runs the user effect.

Use TrDoEffectX, and then the N used for selection can be a function; I use this on several Special Abilities.

Not quite sure what that means, but if you can put the timing into the style, then you could call TrDoEffect at any point in the transition. The caveat is that the sound has to be triggered before the WavLen function for it to get the actual sound length.

I appreciate the reply. Typically these would be perfectly good ways to do it.
But to summarize:
The prop allows for armor and health depletion, not only from clashes but also at random times from environmental effects like radiation.
The wav number selected for health coincides with the current health level, so it needs to be set by the prop.
On top of that, the warnings and sayings use the sound queue so they can all be reported. This means the health level might animate in the style code when EFFECT_USER1 happens, but the appropriate health wav might be queued up after a currently playing hazard sound or something.

So… I think we’ll just have to go with the average length and hard code it.

Gotcha, yeah that’s pretty specific then, good luck with it.

One way to make something like this work would be to add a new set of effects, something like EFFECT_USERn_STEP2. When EFFECT_USERn occurs, we would also add EFFECT_USERn_STEP2 to a queue. This queue would work similarly to SOUNDQ, but instead of playing sounds, it would trigger an effect, then check the length of any sound that effect triggered and wait that long before triggering the next effect in the queue.

Styles can then decide whether to use EFFECT_USERn (which are immediate effects) or EFFECT_USERn_STEP2 (which are queued). Each effect can potentially also have its own sounds.

Cool!

I don't quite get that… the EFFECT_USERn sound already plays queued.
Would WavLen of the STEP2 effect be available then? Because that's the idea here, so it can be used to set delays and such in style code. It sounds similar to SB_Effect vs. SB_Effect2, but I'm not seeing how it's used.

SOUNDQ is pretty helpful. I need to wrap my head around it, which you'd think I would have by now, being a sound kinda guy :frowning:
It probably could have made some workarounds in my prop unnecessary.

The problem with the current implementation is that the sound is not happening at the same time as the effect.

So the idea is to have two effects: one immediate, one delayed. Basically, we have a queue of effects instead of a queue of sounds. Both effects can in theory play sounds; both effects can trigger lights or other things. Since sounds and effects are now triggered at the same time, WavLen will work. (But USERn effects can only access the length of sounds triggered at that time, and USERn_STEP2 effects can only trigger effects that happen at that time.)

The similarity to SB_Effect2 is mostly incidental. SB_Effect and SB_Effect2 are just a way to order things so that sound is played first (in SB_Effect) and then we know the length of those sounds when we do other things (in SB_Effect2).

However, just like SB_Effect2 knows the length of any sound played, you can also get the sound length immediately after triggering an effect, and that’s what I was suggesting before.

Instead of building a parallel queue of effects, how about just waiting to call the STEP2 until SOUNDQ actually plays it, then fetch the sound length?

  • Sound queues to SOUNDQ
  • When the sound plays, manually trigger EFFECT_USERn_STEP2
  • SB_Effect fetches the sound length
  • Blade style uses WavLen<EFFECT_USER1_STEP2>

We could add a member variable to SoundToPlay (like effect_to_trigger_), defaulting to EFFECT_NONE. Then call it like
SOUNDQ->Play(SoundToPlay(&SFX_health, EFFECT_USER1_STEP2))
but plain SOUNDQ->Play(&SFX_health) would also still work if no STEP2 is desired.

Isn’t that exactly the same thing?

Define “manually”.

This would extend soundq to also handle effects.
But normally we map effects to sounds in SB_Effect2(), and if we do that, we won’t need to pass in SFX_health to SOUNDQ, effectively making it an effect queue instead.

SOUNDQ is already meant to be extensible, so something like SOUNDQ->Effect(EFFECT_USER1_STEP2); would work as well. Either of these ideas is definitely possible without building a new queue. Building a new queue might be more efficient if you only need one, but having them together allows for mixed use, which could also be helpful…

I submitted a proof of concept PR of how I meant.

I guess the first question we have to answer is whether we really need this. SOUNDQ is meant for spoken HUD messages and other things that can't really say two things at the same time. (Like a companion robot, or a GPS.) The question is, do we really need to have light effects synchronized with those things?

I could certainly see LED animations on some “mouth” accent strip or similar, or in this case health and armor level VU-meter type readouts.
Or a spoken menu string of prompts, where maybe each prompt could have its own animation and would require the sync.
If it’s not used, it doesn’t take up space right?

That is only sometimes true.
When you put two features in one class, feature B often takes up some space even if you only use feature A.