Project Mix: Fireworks Test

150 nodes, all playing according to their connection timings.

So, I had an idea. I wanted to make fireworks as a way to celebrate the 4th of July, and I figured: let's try to do it with Project Mix.

What you see in the video is 150 connected nodes. Each node has its own note and sound settings.

What a node looks like up close
What the actual sound settings look like

All of these are defined by presets that I create manually by importing a wav file from FL Studio for each octave I want available. Like I've talked about in my past post on audio visualization using the baked analysis that Unreal just added, there are only a handful of actual sound files, which makes my life so much easier when it comes to quick importing and management. However, I don't currently have support for just.. one-shot type sounds.

The FL Studio MIDI used for exporting sounds.
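Under the hood, a preset like this boils down to a tiny lookup table. Here's a minimal sketch of the idea in C++; the names (UMixNotePreset, OctaveWaves) are made up for illustration and aren't the actual classes in my project, and the semitone pitch-shift is just one plausible way to get the in-between notes:

```cpp
#include "CoreMinimal.h"
#include "Engine/DataAsset.h"
#include "Sound/SoundWave.h"
#include "MixNotePreset.generated.h"

// Hypothetical preset asset: one imported wav per octave of an instrument,
// so only a handful of sound files need to exist per preset.
UCLASS(BlueprintType)
class UMixNotePreset : public UDataAsset
{
    GENERATED_BODY()

public:
    // Octave number -> the wav exported from FL Studio for that octave.
    UPROPERTY(EditAnywhere, BlueprintReadOnly, Category = "Preset")
    TMap<int32, USoundWave*> OctaveWaves;

    // A node could then pitch-shift within its octave to hit an exact note:
    // one semitone is a pitch multiplier of 2^(1/12).
    static float SemitoneToPitch(int32 Semitones)
    {
        return FMath::Pow(2.0f, Semitones / 12.0f);
    }
};
```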

I tried to take a somewhat additive approach to creating this. I started with the basic kick to get the loop that I wanted to use, layered in the snares and hi-hats, and then went to town emulating the “firework” feel I was going for. This meant that I needed to create a few buttons to help make things faster, like Randomize Notes, Randomly Place in a Spherical Pattern, and selection helpers.

The buttons at the top are examples of the added usability helpers.
In the end… it looks like this hellstorm of madness. There’s gotta be a better way.
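About that spherical placement button: under the hood it doesn't need to be anything fancier than a loop over the selected nodes. A minimal sketch of how that kind of helper might look, assuming the nodes are plain actors (the function and its names are illustrative, not my actual implementation):

```cpp
#include "CoreMinimal.h"
#include "GameFramework/Actor.h"

// Hypothetical helper behind a "randomly place in a spherical pattern"
// button: scatter node actors inside a spherical shell around a center.
// FMath::VRand() returns a uniformly distributed unit vector, so the
// directions come out even; the radius distribution here is kept simple
// rather than volume-uniform.
static void ScatterNodesInSphere(const TArray<AActor*>& Nodes,
                                 const FVector& Center,
                                 float MinRadius,
                                 float MaxRadius)
{
    for (AActor* Node : Nodes)
    {
        if (!Node)
        {
            continue;
        }
        const FVector Direction = FMath::VRand();
        const float Radius = FMath::FRandRange(MinRadius, MaxRadius);
        Node->SetActorLocation(Center + Direction * Radius);
    }
}
```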

Right now there is NO MIXING setup at all. This is about as raw as audio playback can get in Unreal. Just play the damn sound… This is also why certain parts might sound a bit too loud or too soft. So the next phase of development for this system is pretty clear: Project Mix.. needs.. well.. a MIX of some sort. But that's not the most worrying thing to me. I think the biggest issue with development right now is that I have to hit play.. wait for the loop to start… and hope that it sounds the way I want. This makes things extremely slow and aggravating to work with. So, I think I am gonna take an interesting step forward into editor-based nodes that actually play while I am tweaking them (obviously with some control over when they play). Imagine something kinda like Planet Coaster, where the roller coaster keeps running along the track you are currently building. This is something I think is needed in order to not rip my hair out creating the next scene and, ultimately, the final vision for this project (which I am currently keeping to myself).

Then… well, there is the Time Synth, which aims to solve all of my synchronization problems… so I gotta get going on that too. I did watch Dan's amazing overview of it, and I think it has a ton of potential with one fatal flaw… Time Synth components are not made to be used spatially… 150 times in a scene. Therefore.. I'd have to have the nodes call a master Time Synth in order to play things on time efficiently. Maybe if I do that, I can convince the awesome audio folks to help me out with getting 3D sound to work properly per node.
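For reference, the direction I'm picturing is one master component that owns the musical clock, with every node routing its clip through it. A rough sketch, assuming the 4.22 TimeSynth API as I understood it from Dan's overview (SetBPM, Start, and PlayClip are calls on UTimeSynthComponent as far as I can tell; the master actor and the PlayNodeClip wiring are pure speculation on my part):

```cpp
#include "CoreMinimal.h"
#include "GameFramework/Actor.h"
#include "TimeSynthComponent.h"   // from the TimeSynth plugin (4.22+)
#include "MixTimeSynthMaster.generated.h"

// Hypothetical master actor: one TimeSynth owns the musical clock, and
// every node routes its clip through it so playback stays quantized.
UCLASS()
class AMixTimeSynthMaster : public AActor
{
    GENERATED_BODY()

public:
    AMixTimeSynthMaster()
    {
        TimeSynth = CreateDefaultSubobject<UTimeSynthComponent>(TEXT("TimeSynth"));
        RootComponent = TimeSynth;
    }

    virtual void BeginPlay() override
    {
        Super::BeginPlay();
        TimeSynth->SetBPM(120.0f);
        TimeSynth->Start();
    }

    // Nodes would call this instead of playing their own audio component.
    // The clip's quantization settings decide when it actually starts.
    UFUNCTION(BlueprintCallable, Category = "Mix")
    void PlayNodeClip(UTimeSynthClip* Clip)
    {
        if (Clip)
        {
            TimeSynth->PlayClip(Clip);
        }
    }

private:
    UPROPERTY()
    UTimeSynthComponent* TimeSynth;
};
```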

So to finish it off.. here are a few bullet points for the future.

  • Better editor development workflow.
  • An actual Sound Class and Submix setup.
  • Update the core audio system to use the Time Synth component instead of my janky synchronization master Blueprint.

Those three are enough to keep me busy for a while. On that note, have a great day <3

Using Unreal 4.22’s Baked Spectral Analysis

Hi, welcome to the first post on my website. This is pretty rad, woop!

First off, what is Project Mix? 

I’ve been experimenting with an interactive node-based visual audio simulation in UE4, purely as a way to dive deep into the audio engine’s potential and to end up with something that I think will be really fun to mess with in my free time. The video below shows where it has been sitting for the last month since I last touched it. It’s got a ton of work left, but I think it is starting to show some interesting promise.

Early Test

More Refined Testing

4.22 added a pretty amazing feature to Unreal’s audio engine.

Baked Spectral Analysis Curves and Envelopes on Sound Waves

Sound waves can now be pre-analyzed for envelope and spectral energy to drive Blueprints during playback. This allows sound designers to create compelling audio-driven systems, while offloading the spectral analysis work to improve runtime performance. In addition, analysis data from a proxy sound wave can be substituted for a sound wave’s analysis data, allowing designers to spoof isolated sound events when trying to drive gameplay.

Every .wav file now has settings on it for analyzing the data and outputting it as something that could potentially be used for gameplay / animation techniques. Well.. turns out I might have a use for both of those things. For now I wanted to see how it works and how easy it was to implement.

Setup

Here are some settings I am experimenting with inside of the wav file uasset.

Boring explanation of why I chose these numbers.
Everything is pretty much maxed out, and from what I could tell the uasset does not appear to grow much in size when making these values very large. Honestly, they are all undocumented, so I have no idea if cranking up these values or adding more and more frequencies to analyze is a horrible thing. I imagine it would be, but right now ¯\_(ツ)_/¯

The biggest ones to pay attention to are “Frequencies to Analyze” and “Frame Size”. I am guessing that if I have a 2048 FFT size and a 4096 frame size, then I am essentially allowing 2 FFT windows per frame of data? Maybe? Who knows? WHAT I DO KNOW is that the frequencies to analyze actually do matter, and the engine will do its best later on to match or lerp between the closest analyzed frequencies if you are trying to, say, grab something in the 300 Hz range but there is only data for 100 Hz and 500 Hz. (That was actually some nicely commented code under the hood. Thanks, Epic!)
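For what it’s worth, here is the back-of-the-napkin math behind that guess, assuming Frame Size is measured in samples (my reading, since it’s undocumented):

```cpp
#include "CoreMinimal.h"

// Back-of-the-napkin check of the "2 FFT windows per frame" guess,
// assuming the Frame Size setting is measured in samples.
static void PrintAnalysisResolution(float SampleRate, int32 FrameSize, int32 FFTSize)
{
    const float SecondsPerFrame    = FrameSize / SampleRate;        // 4096 / 48000 ~= 0.085 s
    const float FFTWindowsPerFrame = float(FrameSize) / FFTSize;    // 4096 / 2048 = 2

    UE_LOG(LogTemp, Log, TEXT("One analysis point every %.3fs, %.1f FFT windows per frame"),
           SecondsPerFrame, FFTWindowsPerFrame);
}
```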

Blueprint Time. (It’s actually not that bad)

Ok.. this part actually isn’t as hard as it might seem. Really, this was super easy overall; I just had to find the right nodes.

GetCookedFFTData is the magic node we are looking for, and you get it directly from ANY AUDIO COMPONENT. It’s just built right in now, which is fantastic. You hook this up to Event Tick and you’ll get the magnitude of the frequency you want whenever that sound is told to play. Since it’s directly inside of an audio component, that also means you don’t have to sync data up… I am.. really impressed it has this much integration, to be honest. I was expecting some back-alley bridge of connectivity between 3 different systems.
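For anyone who prefers C++, the same thing looks roughly like this. GetCookedFFTData and FSoundWaveSpectralData are the actual 4.22 names as far as I can tell; the actor around them is just scaffolding for the example:

```cpp
#include "CoreMinimal.h"
#include "GameFramework/Actor.h"
#include "Components/AudioComponent.h"
#include "MixSpectralNode.generated.h"

UCLASS()
class AMixSpectralNode : public AActor
{
    GENERATED_BODY()

public:
    AMixSpectralNode()
    {
        PrimaryActorTick.bCanEverTick = true;
        Audio = CreateDefaultSubobject<UAudioComponent>(TEXT("Audio"));
        RootComponent = Audio;
    }

    virtual void Tick(float DeltaSeconds) override
    {
        Super::Tick(DeltaSeconds);

        // Ask the playing audio component for the baked magnitudes at the
        // frequencies we care about. This only returns data if the sound
        // wave had baked FFT analysis enabled and is currently playing.
        const TArray<float> Frequencies = { 100.0f, 1000.0f, 5000.0f };
        TArray<FSoundWaveSpectralData> Spectra;
        if (Audio->GetCookedFFTData(Frequencies, Spectra))
        {
            for (const FSoundWaveSpectralData& Band : Spectra)
            {
                // Band.FrequencyHz, Band.Magnitude, and Band.NormalizedMagnitude
                // are ready to drive scale, emissive, or whatever else.
            }
        }
    }

private:
    UPROPERTY()
    UAudioComponent* Audio;
};
```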

Results

With a little bit of a lerp on the magnitude to help smooth out some jitters, the result is not bad at all, and I am rolling forward with this as part of my sound player base class so I can easily animate things with that data. Essentially, I now get animation curves… for free? Hopefully the maxed-out settings don’t bite me later on. To test it, I replaced what used to be three animation curves (one each for attack, sustain, and release) with the straight-up data from the analysis, which I think is pretty awesome. I am gonna further explore the capabilities of this as I harden a few more of the core systems for this thing. You should be able to notice the animations are much more accurate to the sound you actually hear.
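The smoothing itself is nothing special. One frame-rate-aware way to do that lerp is FMath::FInterpTo; the interp speed here is an arbitrary starting point for illustration, not a tuned value from my project:

```cpp
#include "CoreMinimal.h"

// Smooth the raw baked magnitude before it drives animation. A higher
// InterpSpeed tracks the sound more tightly; a lower one smooths more.
float SmoothedMagnitude = 0.0f;

void UpdateSmoothedMagnitude(float RawMagnitude, float DeltaSeconds)
{
    SmoothedMagnitude = FMath::FInterpTo(SmoothedMagnitude, RawMagnitude,
                                         DeltaSeconds, /*InterpSpeed=*/10.0f);
}
```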

I am gonna keep going. If anyone has any questions / ideas / feedback I am all ears! But that’s all I got for now. I am gonna try to post regularly but … who knows? Just trying it out.

Have a good day
<3 Zuko