Niagara Grid 2D feels like a superpower! Drawing Locations To a Render Target in Unreal 5.1

Hello folks, my name is Chris Zukowski and I like to be called Zuko for short. I have been working in the game industry for 11 years as an environment artist turned technical artist, and I am currently a technical director at Terrible Posture Games. However, in my free time I’ve been developing Mix Universe: A Musical Sandbox Game.

The main reason for focusing on this tech was to find a quick way to create a texture at runtime that I could use for fog and potentially fluid simulations as well. Previous methods were expensive (around 2-5 ms) and used complex systems (like a camera capture actor and spawning particles only the camera can see, or even tapping into a widget component’s render target and spawning circle widgets).

After a bit of tinkering with Grid 2D and Niagara simulation stages, the end result is the ability to draw hundreds of circles at a time with very minimal impact on the CPU and GPU. I decided to make a video and example project showing from scratch how to make this Niagara system and use custom HLSL code to draw the circles. You don’t really need any experience with Niagara for this tutorial, and for those who know your way around, you can skip ahead to your liking! I tried to make the tutorial as flexible as possible.

Full video tutorial is here

Example project to follow along with.

Original tweet that propelled me to make a tutorial. Enough folks were really interested in how it was done, and I didn’t want to alienate anyone who might not have a lot of Niagara experience. Don’t be scared that there is code in there, it’s really not that bad!!

Full Breakdown (Same as Video Tutorial)

Getting Started

The example project has important blueprints that help get things rolling without having to focus on getting data. You can download it for whatever price you feel is fair.

It is an Unreal 5.1 project, so you will need to download 5.1 or higher from the Epic Games Launcher in order to run the project.

When you open the project, there are 2 levels to choose from.

MixGrid2D_Location_Start – Where we are going to start today.
MixGrid2D_Location_Finished – Where you could go if you don’t wanna follow along and just wanna sift through the finished setup.

Here is what we will be making today.

When you open up MixGrid2D_Location_Start, you will see this.

When hitting play, the sphere will be animating, but there will be no texture. We will be making this from scratch.

Creating the Render Target

Create a Render Target by right clicking and going to Textures->Render Target 

We will call it RT_MixGrid2D_Locations

Double click to open it and change the Address X and Address Y to Clamp.

This ensures that the texture will not continue past the edges.

Everything else by default is fine to leave as is!

Making the material.

This will likely be one of the simplest materials we will ever make. Right click and go to Material to create a new one called MM_MixGrid2D_Locations.

Double click to open it. Click anywhere in the blank space of the graph and change the Shading Model in the Details panel on the left to Unlit.

Click and drag the RT_MixGrid2D_Locations texture into the graph, and hook up RGB to the emissive color.

That is all we need for this demo setup. It just illustrates that, once we are good to go, you can expand this shader later into whatever you need it to be using this texture.

Creating the Niagara System (No Grid2D yet)

The niagara system we are going to make today will need 2 specific User Parameters that we will hook into the blueprints provided.

Right click and create a new niagara system and call it NS_MixGrid2D_Locations

Double click to open it up. Then anywhere in the niagara system overview graph, right click and go to 

Add Emitter

From this list, choose Empty
This creates a mostly empty emitter so that we have something to work from that compiles properly.

Under properties, we will change the Sim Target to GPUCompute Sim and the Calculate Bounds to Fixed.

Using the GPU in this context is important whenever doing Grid2D simulations, since we are iterating over a 256×256 grid (more on that later).

Fixed Bounds is much cheaper than dynamic bounds and is generally recommended for GPU Simulations.

Next, we will delete the Sprite Renderer from the Render section since we won’t be needing it for this setup.

Here we can go ahead and add 2 important user exposed attributes to work with. These will be 

Vector 4 Array – LocationsAndSizes

And a Texture Render Target called TextureRenderTarget.

LocationsAndSizes will be the data we will be using to tell the circles where to draw and how big. 

TextureRenderTarget will be set to RT_MixGrid2D_Locations from Blueprint.

We can now save the Niagara system, and jump over to the 2 blueprints included in the example project.

Included Example Blueprints Overview

I think it’s important to understand where the data is coming from and how it is connected. This is why I think it’s good to run through the paces of creating most of the content from scratch. However, this isn’t necessarily a blueprint tutorial, so I didn’t want to spend a ton of time going over blueprint basics or anything like that. Instead, we have 2 blueprints provided to work with and modify with our new assets we created.

BP_MixGrid2D_LocationActor – The sphere actor that is in the level moving back and forth.

BP_MixGrid2D_LocationManager – The actor that has the plane mesh, niagara system, and sends the data to the niagara system based on the BP_MixGrid2D_LocationActors in the world. 

BP_MixGrid2D_LocationActor 

The BP_MixGrid2D_LocationActor is a simple blueprint setup that moves back and forth based on a cached start location and a timeline. It also holds a Radius value, set to a random size, that we will tap into.

Radius will be the 4th component (W) of each Vector 4 in the LocationsAndSizes array that is sent to Niagara.

BP_MixGrid2D_LocationManager 

The BP_MixGrid2D_LocationManager holds a blank niagara system, a plane mesh, and the logic for sending the data to the niagara system. We will modify this actor a little bit to finalize sending the data to our new assets.

The Event Graph’s Begin Play gets all of the BP_MixGrid2D_LocationActors that exist in the world. From there we store them into an array called AllLocationActors. This array’s size is then used to resize the Vector 4 array called LocationsAndSizes to match however many actors there are in the world.

Then on Event Tick, we loop through all of the location actors and set each corresponding Vector 4 array entry to the location actor’s location and radius.

XYZ = Location 

W = Radius

The important part to really understand is that it’s using this NiagaraSetVector4Array node to send the data to our user parameter we created above in our niagara system.
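
If you ever move this logic out of Blueprint, here is a minimal C++ sketch of the same idea (not part of the example project). It assumes a UNiagaraComponent pointer, the LocationsAndSizes user parameter from above, and SetNiagaraArrayVector4 from the Niagara array function library, which as far as I can tell is the function behind the NiagaraSetVector4Array node.

#include "GameFramework/Actor.h"
#include "NiagaraComponent.h"
#include "NiagaraDataInterfaceArrayFunctionLibrary.h"

void SendLocationsToNiagara(UNiagaraComponent* NiagaraComponent, const TArray<AActor*>& LocationActors)
{
	TArray<FVector4> LocationsAndSizes;
	LocationsAndSizes.Reserve(LocationActors.Num());

	for (const AActor* Actor : LocationActors)
	{
		if (IsValid(Actor))
		{
			// XYZ = location, W = radius (the radius value here is hypothetical;
			// read it from your own location actor instead).
			const float Radius = 50.0f;
			LocationsAndSizes.Add(FVector4(Actor->GetActorLocation(), Radius));
		}
	}

	// Same job as the NiagaraSetVector4Array Blueprint node: push the array into
	// the "LocationsAndSizes" user parameter on the Niagara component.
	UNiagaraDataInterfaceArrayFunctionLibrary::SetNiagaraArrayVector4(
		NiagaraComponent, FName("LocationsAndSizes"), LocationsAndSizes);
}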

This is how it connects. So let’s go ahead and make sure that NS_MixGrid2D_Locations in the Blueprint is set up properly.

Click on the component in the components list.

Then set the Niagara System Asset to the new one we just created, NS_MixGrid2D_Locations.

And ALSO set the TextureRenderTarget user parameter to the Render Target we just created, RT_MixGrid2D_Locations.

This is all we need to do for the niagara system to be hooked up and getting the data.

Last thing we need to do for this blueprint is click on the RenderTargetPlane in the Components list.

And change the material the plane uses to the MM_MixGrid2D_Locations material we made.


After hitting play to test the scene, you will now see a black plane and the spheres moving. 

If you hit F8 while playing, it will detach the camera from the pawn and we can move and click on the plane.

In the details panel you can expand LocationsAndSizes to see the data we are sending in real time.

This is just a good way to visualize all of the values for debugging purposes really easily.

Setting up Grid 2D from scratch to draw to a texture.

Feel free to skip this section if you already know how to set up Grid2D and simulation stages.

Let’s open up NS_MixGrid2D_Locations again.

We will now set up our Grid2DCollection and RenderTarget data interfaces to work with.

From a basic perspective, Grid2D is simply a data set that allows you to store data per cell in a 2D grid. In our case, we will be treating each cell as if it were a pixel on a texture. But that is just one simple example of usage; you can store any data you want and manipulate it over time however you want.

RenderTarget data interfaces are specifically a way to store and iterate through each pixel of a render target.

Under Emitter Attributes, we will add a Grid2DCollection and call it Grid

And then we will add a RenderTarget2D and call it RenderTarget

NOTE – RenderTarget2D is completely different from TextureRenderTarget. RenderTarget2D is an interface that allows the manipulation of the TextureRenderTarget’s data. It’s easy to confuse the two and their usage.

It should look like this.

Now under Emitter Spawn on the emitter itself, we can hit the orange + icon and go to Set Parameters.

Then when selected, it will give you the option to add parameters to set. Hit the plus button and add the Grid and RenderTarget we set up.

From here, we will override our buffer format and set it to Half Float.

Then set our texture size to 256×256

And finally, update the RenderTarget User Input to be our TextureRenderTarget user parameter.

We also need to add another module to Emitter Spawn called Grid 2D Set Resolution*

NOTE: the * means that you need the Niagara Fluids plugin enabled for it to even show up (the example project has this on). Also, you must uncheck the library checkbox for it to show up properly in the search results. Hopefully this will be fixed in later versions of Niagara.

For the set resolution module, we need to hook up the grid to be our grid attribute and set the num cells X and Y to our render target resolution.

Now we will add simulation stages to iterate over this data in passes. All a simulation stage is really doing in our case is changing the iteration from once per particle to once per cell of the grid, which is basically each pixel color of our render target.

Adding a simulation stage is pretty simple, find the Stage button and click it to add a new stage.

This will create a new stack group in the emitter called None 

When clicking on Generic Simulation Stage Settings, we can set the name to WriteToGrid.

We can also change the iteration source to Data Interface. And set it to be our Grid data interface.

This is how the simulation stage will know what data to store and manipulate and also how to iterate over the data per cell instead of per particle.

Next, we will create 2 more stages. 

Initialize

This will initialize our grid data once with the proper default values that we set on our attributes later.

WriteToTexture

This will be set to iterate over our render target interface NOT the grid.

Make sure the order of the stack is as follows.

Now that we have this setup, we can start actually using grid 2D to write a basic color to our texture for testing.

Under the WriteToGrid stage, we can hit the green plus icon to add a new scratch pad module.

When doing this, it will add a new module to Local Modules. We can rename it to WriteToGrid as well.

We can do the same for adding a new scratch pad under WriteToTexture as well.

What we can do now is set each grid cell to contain a blue color for testing.

In the WriteToGrid local module, let’s add a color to the Map Set.

And call it RGBA

Then we can right click that variable and change the namespace to StackContext

StackContext is really important in order to automatically write this data to the Grid2D cell. It specifically refers to the simulation stack we are currently in, which is iterating over our Grid2D interface.

And finally change the color to blue.

I know going over each added thing is verbose, but I think it’s important in order to understand the quirks, since it’s really easy in niagara to get lost if you accidentally mess up one thing. Here is what WriteToGrid should look like right now. 

We can hop over to WriteToTexture now and set up sampling the grid and writing the blue color to each pixel.

In our Map Get we can add an input for a Grid2DCollection which we will later hook up to the Grid attribute we made earlier. 

We can call this Grid as well.

We will do the same with the RenderTarget2D and call it RenderTarget

From here we can drag off of the Grid input and create a SamplePreviousGridVector4Value node along with an ExecutionIndexToUnit node. It’s important here to set the Attribute to the exact same name as the attribute we used initially to set the blue color. In this case it is RGBA.

We can now drag off of the RenderTarget pin and create a SetRenderTargetValue node. Then hook up the Value from the SamplePreviousGridVector4Value and finally drag off the Grid again and do ExecutionIndexToGridIndex which will plug directly into IndexX and IndexY

This gives us a graph that looks like this for Write To Texture.

We now need to hit Apply Scratch to compile the module and then go back to our emitter and select the WriteToTexture module.

Here we will see empty attributes. We will need to fill them in with our Grid and RenderTarget by clicking the drop down arrows.

Giving us a result that looks like this.

Now you will see that our texture is blue! Note: you may need to hit play to see the results; it updating without play-in-editor is a bit inconsistent.

This is the basics! From here we can start doing the fun stuff!

Drawing the locations to the Render Target

Now that we have a basic setup and our emitter looks like this, we can talk about what we are actually here to do: taking the LocationsAndSizes data and using it to draw directly to the texture!

I think it’s important to go over the thought process here.

I set the WriteToGrid color back to black before moving forward. Here is a top-down image of the spheres.

When iterating over each cell of the Grid2D, we are acting as if it is the pixel color information. 

Using the distance from each cell (converted to world space) to the input location, we can determine whether the pixel should be a different color inside or outside of the circle. In the example below, the red pixels are outside and the white pixels are inside. In our final version, red will be replaced with black.
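
To make that idea concrete before we build it out of nodes, here is a tiny hypothetical helper in C++ (the real version is the custom HLSL we write later): a cell is inside a circle when its world-space distance to the circle’s center is smaller than that circle’s radius.

#include "Math/Vector.h"

// Hypothetical helper, not code from the project: returns 0 outside the circle,
// and a value fading from 1 at the center to 0 at the edge inside it.
float CircleFalloff(const FVector& CellWorldPos, const FVector& CircleCenter, float Radius)
{
	const float Dist = static_cast<float>(FVector::Dist(CellWorldPos, CircleCenter));
	if (Dist >= Radius)
	{
		return 0.0f; // outside: leave the pixel black
	}
	return 1.0f - (Dist / Radius); // inside: brightest at the center
}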

So moving forward now with this idea in mind, we can jump back into our WriteToGrid module. 

We will need to set up an input for the Grid the same as we did for the WriteToTexture module.

This gives us something like this. 

Now we can establish our grid location by converting the current cell we are executing on into world space so we can properly do our distance check.

NOTE: for the ease of this tutorial, I have set things up so that the plane in blueprint is at 0,0,0 in world space. If you wanted to change that, you would have to offset this math by the location desired. 

Now we can drag off of the Grid and get ExecutionIndexToGridIndex and GetNumCells.

With this we can center the grid data by offsetting it by -0.5 * CellCount

From here we can make a vector using the X and Y values and leaving Z set to 0.

Then we will multiply this by another Input float value called WorldScale

This multiplier is how we calibrate the scale to match perfectly with the size of the plane.

We can drag off the result now and set this to a local attribute called GridLocation
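
Written out as plain code, the math these nodes implement is roughly the following sketch (assuming, as noted above, that the plane is centered at the world origin):

#include "Math/Vector.h"

// Rough equivalent of the node graph above (hypothetical helper): convert a grid
// cell index into a world-space position on the plane.
FVector GridCellToWorld(int32 IndexX, int32 IndexY, int32 NumCellsX, int32 NumCellsY, float WorldScale)
{
	// Offset by -0.5 * CellCount so the grid is centered on the origin,
	// then multiply by WorldScale to calibrate it to the size of the plane.
	const float X = (static_cast<float>(IndexX) - 0.5f * NumCellsX) * WorldScale;
	const float Y = (static_cast<float>(IndexY) - 0.5f * NumCellsY) * WorldScale;
	return FVector(X, Y, 0.0f);
}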

Dragging off of the Dest Exec pin, we can do another Parameter Get and get the GridLocation we just set.

We can then break the vector and connect it to our RGBA output to test if the values are what we expect.

The full module at this point.

As before, we need to also hook up our inputs properly. So we can jump back to the emitter and select the WriteToGrid module. We can set our Grid and then set the WorldScale to 1.0

This will give you a result that should look like this.

The only reason the texture is black on one side is that those values are negative, thus rendering black. Now that our method for converting to world space is working properly, we can jump into iterating over the locations per cell to check their distances!

So back in the WriteToGrid module, we can delete the test setup and keep the Map Get node with GridLocation.

We will also add a few more inputs to save us some time
Input Vector 4 Array: LocationsAndSizes

Input float: RadiusScale

(Existing) RGBA 

(Existing) DeltaTime 

We will be using all of this data inside of a custom HLSL node.

Custom HLSL Setup

Why use Custom HLSL? 

Sometimes the logic is complex enough that it may feel easier to code it in HLSL rather than with nodes. In our case, however, we need a for loop so we can iterate over all of our locations, which you simply cannot do with nodes. Epic does warn that this is not advised; my guess is it’s related to stability of the editor, and potentially the game, if you get too complex with what you are doing in a loop. In this case, I haven’t seen any issues.

If you are already familiar with Custom HLSL, you can feel free to skip to the end where the full code snippet is shared rather than taking the steps. However, if you are new to it, I highly encourage you to follow the steps here since there are a few gotchas along the way.

What we will do first is drag off of GridLocation and create a new CustomHLSL node.

We will promptly delete all of Epic’s advice in green here so the node is a bit smaller and blank for us to work with.


Then we will right click the GridLocation pin on the node and Rename it to InGridLocation.
NOTE: When dragging inputs into the custom HLSL node, I have found that renaming them so they are unique to the graph is very important to prevent issues with data manipulation and bugs. So for this tutorial I am preventing the issues by adding the prefix In to every HLSL input.

Next we will drag all 4 of the other pins as well and rename them using the prefix. We will also drag from the output pin into RGBA and rename the output to OutRGBA.

After doing this, you will get 2 errors when you click Apply Scratch

These errors complain that the input RGBA for the stack context has no default value. We can assign one by going to our Parameters tab, clicking on the StackContext RGBA, and adjusting its DefaultMode to Value instead of Fail If Previously Not Set.

Also, once again we have to go back to our emitter and set up the inputs properly before we move on.

With this we can jump back into the WriteToGrid module and add some simple code to the Custom HLSL for testing.

OutRGBA = InGridLocation.x;

This code is mimicking our test from before so the output result looks exactly the same if the HLSL node is functioning properly!

Now we can finally enter the code we need.

The Finished HLSL Code

Good test settings for this demo are setting RadiusScale to 2.8 and WorldScale to 6.

Then for our custom HLSL we can enter the final code in.

int Out_Num;
InLocationsAndSizes.Length(Out_Num);

float4 NewGridValue = 0;

for(int Index = 0; Index < Out_Num; Index++)
{
	float4 Out_Value;
    InLocationsAndSizes.Get(Index, Out_Value);
    const float Distance = length(Out_Value.xyz - InGridLocation);
    if(Distance < (Out_Value.w * InRadiusScale))
    {
         const float Falloff = (1 - (Distance/(Out_Value.w * InRadiusScale)));
         NewGridValue = NewGridValue + Falloff;
    }
}
OutRGBA = lerp(InRGBA, NewGridValue, InDeltaTime * 5);

  1. First we get the length of the LocationsAndSizes array and store it in a value called Out_Num.
  2. Next we create a new float4 value called NewGridValue and assign it a default value of 0. This is what gets evaluated and added to during the for loop.
  3. Then we set up the for loop to iterate through the length of LocationsAndSizes.
  4. Then we get the value at the for loop’s current index. This value is a float4 called Out_Value; its XYZ is the location and its W is the radius.
  5. Next, we calculate the distance by subtracting the two locations and taking the length of the resulting vector.
  6. We then calculate a smooth falloff using the current distance value and the circle’s radius multiplied by InRadiusScale.
  7. Then we add the falloff value on top of NewGridValue. This allows multiple circles to stack on top of each other additively.
  8. And finally we interpolate between the previous frame’s RGBA and our current NewGridValue to get a smooth output result. The multiplier on InDeltaTime controls how strongly the new frame blends in on top of the old one.

The output result is that the spheres should now be blending smoothly as they get closer to the plane. 

That concludes setting up the basics of this method, giving you something you can drive many different things off of. I hope this helps, and I am excited to see what other folks can do with this method!

Taking this further.

For Mix Universe, I also have FogAdjustments and FogColors arrays. The colors come in and show up when the nodes play, and the sizes also adjust slightly during this moment, allowing the fog to shrink and grow.

Hope this helps!

And that is it! I hope this is useful and please check out Mix Universe if you get the chance and are interested in learning more about the project or seeing the fog in action. 

If you are interested in learning more about Grid2D and simple fluid sims, check out Partikel’s Grid2D tutorials as these are what I used to get started with the basics!

Grid2D Quickstart
https://www.youtube.com/watch?v=XVKpofOj44c
Grid2D Advection
https://www.youtube.com/watch?v=4NxBonHkyNg

Hope you have a good day!

UE5 – MetaSound Performance + Latency Tips

If you’ve started to dive into making systems for your game using MetaSounds, you may have found yourself with some minor hitches or latency when playing the sounds.

These tips might help!

Try toggling the Async MetaSound Generator.


When this is on, it reduces the CPU cost during play; however, it can lead to latency with larger MetaSounds. It might be worth turning it off (Mix Universe has it off to ensure that a sound will always play at the right time).

au.MetaSound.EnableAsyncGeneratorBuilder 0

Set the MetaSound BlockRate to a lower value


I’ve found 28 is a sweet spot for latency and performance on lower end machines. Lower numbers will increase latency but decrease CPU usage; higher numbers will decrease latency but increase CPU usage. The default is 100, and this is described in code as “blocks per second” when processing the audio for a MetaSound. Since the value is blocks per second, the default of 100 works out to roughly 10 ms of audio per block, while 28 is roughly 36 ms per block, which is where the added latency comes from.

au.MetaSound.BlockRate 28

Make sure to enable stream caching and force streaming on your audio files.


In project settings, enabling stream caching will ensure that audio files can be processed as fast as possible during runtime.
On your wav files, you can force streaming and make them seekable to take full advantage of wave samplers in MetaSounds.

Try choosing ADPCM or PCM for your audio compression.


PCM is uncompressed and has a high memory footprint, but will result in the fastest audio playback possible.

ADPCM is 4x compression which may cause artifacts, but typically is ok for sounds that don’t have a ton of high frequency detail.

What these formats avoid is any type of decoder, which would cause CPU performance overhead and latency (especially on something like the Quest).
You can monitor this using stat audio.

Consolidate or reduce inputs


Here is a pretty complex MetaSound I am using as an example. I have found around 30 inputs is the sweet spot, right before it starts to cause real issues. It may vary depending on the input type, but these were my findings with ints and floats.

Right now, the only real way to improve this is to simply reduce the number of inputs you are sending to a MetaSound. It doesn’t really matter which ones are sent during play.

Reduce usage of Compression or Delay Nodes


Similarly, any nodes that use a large buffer size or a copy of a previous audio buffer can add up really quickly. Reducing lookahead time helps for compressors, and in general, avoid using too many of these in your graph. I’ve found 1 of each is fine, but if you start pushing it too far, it will start to show when playing your sound!

Reducing Lookahead time will decrease latency and also help with performance.

If you see constructor pins, they can save you!


As MetaSounds evolve, you will notice diamond-shaped connection points. These are constructor pins and are very important for performance. They are only ever evaluated once and cannot change with inputs, which means they can be optimized to be super cheap. In the case of a delay, this allows you to set the max allowed delay time to keep the audio buffer allocated for the delay from ever going over a certain length. Smaller numbers here make the node much cheaper.

Diamonds are constructor pins.

What if none of this works?


Then it is time to dive in deeper. There are many ways to do this, but I personally get the most info initially from doing a simple CPU profile using stat startfile and stat stopfile

stat startfile

stat stopfile

Then use the Unreal Frontend to see what is happening when a hitch occurs!

If you are new to performance profiling, I would highly recommend this talk, as it goes over the basics of the thought process when it comes to solving a performance problem!

Then finally, if that isn’t enough, you can look into Unreal Insights a bit more in-depth which will give you the most detail about what is happening to your frames!

https://docs.unrealengine.com/4.26/en-US/TestingAndOptimization/PerformanceAndProfiling/UnrealInsights/

I hope this helps and you can continue making awesome MetaSounds in UE5!

Project Mix Pre-Alpha Demo Video Using Quartz Instead of TimeSynth

It has been quite a while since I have done a formal write-up going over the details of Project Mix and what to do next. Right off the bat, we have some sort of an art style starting to evolve! However, I think there might be a few other options I’d like to explore.

Going From TimeSynth to Quartz in UE4.26

As much as I loved the TimeSynth, this switch was absolutely crucial to making progress. I was spending so much time trying to integrate features that already exist for every other audio system in the engine, and fixing audio crackles / decoding issues with the TimeSynth Component. The good thing is, I now have way more of an idea of how audio actually works in Unreal! I scrapped about 1000 lines of engine override code and directly integrated everything using Quartz. It was a ton of work but ended up being really powerful!

What is Quartz?
This is the new subsystem that allows you to Play any audio component quantized to a beat clock. This means that you no longer have to activate 5 plugins to get something like this working AND .. the biggest win is it is truly available on ANY AUDIO COMPONENT. Which means it also works with audio cues!

Here is a very very basic setup of Quartz.

https://pbs.twimg.com/media/Emi7tV-XEAAnbWX?format=jpg&name=4096x4096

On Begin Play I get the Quartz subsystem and make a new clock! This node also allows you to override an existing clock, which means it’s safe to call multiple times. Clocks are identified by name and can be called upon by name later too, so make sure you have an actual name in there. From there I save the Quartz clock as a variable and then subscribe to a quantization event. You can think of this as hooking into the system so that every time a beat plays, it fires an event.

NOTE: This event always fires at the start of a beat, which means it gives plenty of time (if your frame rate isn’t terrible) to set up the audio that needs to play on the next beat. This part has always been tricky to work around, even with the TimeSynth. Basically, you have to treat everything like a queue and make sure that whatever system you are writing is ahead by a certain amount of time, so that you call this “Play Quantized” node on an audio component exactly 1 beat before you want it to actually play.
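
Here is a small conceptual sketch of that queue idea in plain C++ (no engine API; the actual Play Quantized call is only hinted at in a comment because it depends on your setup):

#include <cstdint>
#include <functional>
#include <map>
#include <utility>
#include <vector>

// Conceptual sketch only: schedule work one beat ahead of when you want to hear it,
// so the audio engine has a full beat of lead time to start it exactly on the boundary.
struct FBeatScheduler
{
	int32_t CurrentBeat = 0;
	std::map<int32_t, std::vector<std::function<void()>>> Pending; // beat index -> actions

	// Your gameplay code calls this with the beat the sound should be HEARD on.
	void ScheduleOnBeat(int32_t Beat, std::function<void()> Action)
	{
		Pending[Beat].push_back(std::move(Action));
	}

	// Called from the quantization event, which fires at the START of a beat.
	void OnQuantizationBoundary()
	{
		++CurrentBeat;
		// Issue everything meant for the NEXT beat now (e.g. call Play Quantized here),
		// so it actually starts on the beat it was scheduled for.
		auto It = Pending.find(CurrentBeat + 1);
		if (It != Pending.end())
		{
			for (auto& Action : It->second)
			{
				Action();
			}
			Pending.erase(It);
		}
	}
};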

If you’d like to see more, you can head over to “Dan Reynolds Audio” on YouTube. He gives a pretty nice introduction to exactly this type of thing, with a bit more detail.

The point here is that there is so much potential with how to use this system I could talk for hours about it!

How does Project Mix work with Quartz?

One of the key things that Quartz did was fix timing issues. After a while, the TimeSynth would desync and become rather unreliable. It also had quite a few crashes that I was accounting for in a really bad way!

Everything syncs to a beat!

The beat clock is the most important thing in a system like this. The Quartz subscription to a quantization event means that we can essentially treat one of these as our main driver for activation! So starting off, I subscribe to a Quartz event on 1/16 notes. 1/32 is a bit too close to tick for comfort currently and complicates the process of setting up timing a bit too much in my opinion. I might switch it later, but I’ve found 1/16 to be the most reliable.

Once subscribed to a 1/16th note quantization event, I then send this event to every node and link in the system. From here they will decide what to do based on which beat they think they should be activated on next. Remember what I said before about having to queue up a beat beforehand in order to properly quantize? Yea…. that makes things quite a pain in the ass! However, essentially what I do is just make sure the entire system is a beat ahead so that the player won’t really know the difference!

To understand how everything works with Project Mix a bit more in-depth, let’s talk about nodes and links. In the Mix system I call them Devices, and all of the Devices operate from a brain that I call “The Machine”.

The Machine.

This is where the heartbeat is established, and everything related to events for the system is handled here as well. The Machine’s job is essentially to distribute the heartbeat to every device in the system! This also means we can control the order of ticking, what actually is ticking, etc., all from one place without having to do custom tick groups or any junk like that.

Machine Device Nodes

These operate simply by being the sound makers and general activators. They have access to links and also have all of the information on what sample they are supposed to play. We can go over samples later! Simply put though, nodes are the intersections in a highway-type system that tell other links to activate and whether or not to play sound.

Nodes can have an unlimited number of links that activate them or that need to be activated after the node is activated.

Machine Device Links

Links are connected to a START and an END node. When a START node is activated, we then tell the END node that it should be activated in however many beats the link is active for. Links can only be activated once per duration of activation. This means that if a node fires off twice while the link is still active, it will ignore the second activation request. Simple enough right? Well… this is where things get hairy.

Solving some pretty critical sync issues! (A bit of a ramble, but could be useful to someone in a similar tricky scenario.)

One of the trickiest hurdles to solve was the order of beats. Imagine this scenario: nodes and links can activate when they think they should be active, but timing is very important to set up during these activated states. For instance, when a node gets activated, it sets up how long it needs to be active for and tells links that they should do the same. Unfortunately, there is really only one event that gets sent out to do this. Which means in some cases, when dealing with 1000’s of nodes all trying to figure out when they should activate, you run into situations where the activation was sent, but the order of activation means it has to wait until the next heartbeat to properly activate! Which is not ideal at all, since 1 more heartbeat would result in something being off sync. The fix was treating the state machine on the devices as a desired state instead of instant changes. This allows a loop to constantly run that handles the state changes, which gets populated anytime a state needs to change. This allows the order to work as requested and for everything to be finished in the same heartbeat event!

For those that are C++ savvy, here is what I mean.

When a device state change happens, we add it to an array on the machine that is constantly running until it finishes during the quantization beat event.

void AMIXDevice::SetDeviceState_Implementation(EMIXDeviceState NewDeviceState)
{
	if (NewDeviceState != NextDeviceState)
	{
		if (IsValid(Machine))
		{
			Machine->AddDeviceNeedingStateChange(this);
		}
		NextDeviceState = NewDeviceState;
	}
}

The event that handles sending out the quantization event to the devices.

void AMIXMachine::SendQuantizationToDevices_Implementation(FName ClockName, EQuartzCommandQuantization QuantizationType, int32 NumBars, int32 Beat, float BeatFraction)
{
	// Send quantization events out to connected devices.
	for (int32 DeviceIndex = 0; DeviceIndex < Devices.Num(); DeviceIndex++)
	{
		if (Devices.IsValidIndex(DeviceIndex))
		{
			AMIXDevice* Device = Devices[DeviceIndex];
			if (IsValid(Device) && Device->QuantizationType == QuantizationType)
			{
				Device->MachineQuantizationEvent(NumBars, Beat, BeatFraction);
				Device->HandleDeviceStates(true);
			}
		}
	}

	// Now handle state changes properly. This is a dynamic array that will get populated any time a state change happens! 
	for (int32 DeviceIndex = 0; DeviceIndex < DevicesNeedingStateChanges.Num(); DeviceIndex++)
	{
		if (DevicesNeedingStateChanges.IsValidIndex(DeviceIndex))
		{
			AMIXDevice* Device = DevicesNeedingStateChanges[DeviceIndex];
			if (IsValid(Device) && Device->QuantizationType == QuantizationType)
			{
				Device->HandleDeviceStates(false);
				DevicesNeedingStateChanges.RemoveAt(DeviceIndex);
				DeviceIndex -= 1;
				continue;
			}
		}
	}
}

The result is this! The ability to accurately handle about 200 devices in a scene all at 90fps at 300bpm with 1/16 notes!

Wee!

Samples

This is very, very much still subject to change, but one of the defining characteristics of Project Mix is the ability to compose something out of nothing. This requires a sample library that is set up in a way to handle multiple velocities and per-note samples. The DAW that I primarily use is FL Studio. Ultimately I would love to find a way to export directly from FL Studio into the library format I use so that I could compose an entire song and get it imported into the link and node format (maybe some day! Baby steps first!).

It Starts with a DAW

In FL Studio I set up a project that has markers for each note on an 88-key scale. All samples are recorded in a way that is faithful to the copyright agreements of each of the different companies I obtain them from. That being said, I still needed to contact each company individually to make 100% sure that it wasn’t in violation of their terms, as some of them require further licensing/permissions before allowing something like this to not be considered “reselling a sample pack”. Lastly, nothing is used without modification and nothing is saved off 1-1. Furthermore, when the product is cooked, it is compressed by 40%, so they will never be the same as buying the plugins, kits, or whatever and using them for production.

From here I export 1 file out and then split it into multiple files based on the markers using FL Studio’s “Edison” plugin.
Using Edison to split the audio into multiple wav files.
Exported audio files.

Importing Samples Into Unreal

Now that we have the files set up, we can go ahead and get them into Unreal. This part is mostly tedious, but I have set up a tool in Unreal using an Editor Utility Widget that can search and parse the files by name to get them into a data format I expect.

The folder structure I use is Audio/Samples/Type/Velocity (Soft, Medium, and Hard). Velocity refers to how hard a key was pressed on the piano. This allows us to blend between sounds rather nicely.

From here, I need to generate the Audio Synesthesia data which will be used by the system to generate visualization data for the nodes to use when the sound activates. I was able to do this by making an editor utility widget to handle creation.

Blueprint inside of the Editor Utility Widget that handles creation of NRT assets when selecting the soundwaves.

Once I have that all set up, I create a data sample asset, which is a custom asset that has a TMap in it for storing all of this per note. ANOTHER NOTE (pun intended): whenever setting up data structures like this in Unreal, it’s extremely important to use soft references or primary data assets. This allows you to handle the loading of your data manually instead of Unreal thinking you need to load everything whenever you reference the data structure directly. It’s really good practice to do so and will prevent turmoil in the long run!
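
For anyone setting up something similar, here is a minimal sketch of that idea (hypothetical names, and simplified compared to the real Mix asset): a primary data asset mapping note numbers to soft references, so nothing gets loaded until you explicitly ask for it.

#pragma once

#include "CoreMinimal.h"
#include "Engine/DataAsset.h"
#include "Sound/SoundWave.h"
#include "MixSampleDataAsset.generated.h"

// Hypothetical example of the idea above: soft references keep this asset cheap to
// reference; load entries with TSoftObjectPtr::LoadSynchronous() or the streamable
// manager only when a note actually needs to play.
UCLASS(BlueprintType)
class UMixSampleDataAsset : public UPrimaryDataAsset
{
	GENERATED_BODY()

public:
	UPROPERTY(EditAnywhere, BlueprintReadOnly, Category = "Samples")
	TMap<int32, TSoftObjectPtr<USoundWave>> SamplesPerNote;
};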

Node Settings for Samples.

On each node I made a structure for playing samples.

Channel refers to which track on the final mixer to use. There are 16 available so that I can mix each instrument individually, and on top of that, each sample has volume controls too. This is also something that is pretty subject to upgrades and changes, since it largely depends on how complex I want to go in the future.

Utilizing the basics of Audio Synesthesia for loudness visualization.

Why Audio Synesthesia and not just the built-in FFT data?

I have found that the built-in FFT system doesn’t account for perceived loudness nearly as well and also has points where it seems to not be as accurate as I would like it to be. Audio Synesthesia is pretty spectacular and decently easy to use.

In the future I would love to do spectral information too and not just loudness, but for now I am starting off simple. All of those NRT assets I created can now be utilized when making the sounds. This is actually a ton more complex than anticipated. In Blueprint, they require you to bind to an active sound in order to get the current playback time of a sound component. I CANNOT use Blueprint that much anymore due to the CPU overhead of the Blueprint virtual machine. In code, I really didn’t want to set this up with delegates; it would just massively complicate the process of checking playback time on potentially 100s of active sounds. So instead, I added some engine overrides to the audio component.

In AudioComponent.cpp

//ENGINE OVERRIDE 
TArray<USoundWave*> UAudioComponent::GetPlaybackTimes(TArray<float>& OutPlaybackTimes, TArray<float>& OutVolumes)
{
	TArray<USoundWave*> Output = TArray<USoundWave*>();

	float FadeVolume = 0.0f;
	if (FAudioDevice* AudioDevice = GetAudioDevice())
	{
		if (IsActive())
		{
			const uint64 MyAudioComponentID = AudioComponentID;
			FActiveSound* ActiveSound = AudioDevice->FindActiveSound(MyAudioComponentID);
			if (ActiveSound)
			{
				FadeVolume = ActiveSound->ComponentVolumeFader.GetVolume();
			}
		}
	}

	if (IsPlaying() && SoundWavePlaybackTimes.Num() > 0)
	{
		for (auto Entry : SoundWavePlaybackTimes)
		{
			Output.Add(Entry.Value.SoundWave);
			OutPlaybackTimes.Add(Entry.Value.PlaybackTime);
			OutVolumes.Add(FadeVolume);
		}
	}
	return Output;
}

This gives me the playback time of every sound that is playing from that component. One issue you’ll notice out of the gate is that “FindActiveSound” has an audio thread check in it to ensure that it’s only called on the audio thread… I just commented that out for now. I could potentially route it much nicer later using the proper “Run On Audio Thread” commands, but I have found that the audio team in particular is a bit overzealous with their check()s in code to make sure the audio system is as stable as possible (makes a ton of sense, but also means that most of the time it seems to be OK breaking some of these rules).

Using that, I can then gather the needed information for the perceived loudness of a sound on tick for any of the active sounds in my audio pool.

The Custom Audio Pool on The Machine

When a node plays a sound, I end up making an audio component on the fly and saving a reference to it in an audio pool structure. This is where I can then grab all of the sounds on tick and update the visual information for each sound in the pool. When they are finished, I can then make sure they are properly removed. With this method, I am only processing what I need to, and I can also handle the proper routing of visual information as well. This information is 100% stored on the Machine so that it only gets calculated once! Any information a node needs is sent over to it after it’s calculated.

Node Visuals Using the NRT Data

Finally we then can use this to drive a material that handles the visuals! This part is still in blueprint, but likely will be moved later.

For the nodes, we have 2 fresnels: one for the sharp outer line and another for the sharp inner circle. This is placed on a spherical object, and based on the settings we can control how much it animates. I called the parameter “Size” initially, but it really should be called ActiveLoudness.

Material for nodes

Material for the nodes (Size is the ActiveLoudness)
Material instance

The links and any other visualizers in the project have access to the NRT data and can use it, knowing the only cost is the animated visual itself rather than calculating the data again.

If anyone has any questions about any of this, feel free to reach out!

What’s Next for Project Mix?

There are a ton of roads that I could take for a project like this. Some people have suggested having the interactivity driven from a DAW like Ableton in real time for live concerts or experimentation. I think that would be pretty sweet, but also super complex, and I am also 99% sure Epic is already doing something like that on a much larger scale for artists. So my initial focus is going to be more of a toy beat maker thing.

  • Getting Interactivity with the nodes in a playable state.
  • Making new nodes on the fly in-game
  • Controlling links in-game
  • Potentially making links distance-based for their timing which would help gamify it a bit.
  • Sequence Nodes (Allowing nodes to play on a sequence)

Next Major Milestone is to be able to start with a blank level and make an entire composition from scratch. This will help prove the system out on a scale of usability that will allow it to grow wings in other areas. I am excited! Woot!

Thanks for making it this far, and if you’d like updates or to stay in touch, be sure to follow me on Twitter and/or subscribe to me on YouTube.

UE4 How to make an actor that can Tick In the Editor

Solution #1 – Make an Editor Utility Widget and use the widget’s tick function to drive functionality on blueprints in the world.

This is probably the easiest of the 2 routes. The only downside is the tick will only run if the editor utility widget is open somewhere. If you need more robust and automatic tick functionality then please refer to solution #2 below.

Solution #2 – Make your own actor class!

In your custom actor class, you can use this to add a bool that toggles being able to tick in the editor. This way, whenever you need it, you can simply check it on and get going. It is also separated so that you can have different logic happen in the editor vs. in-game. Most of the time I end up with 1-1 anyway, but it’s nice to have the option.

header file

/** Allows Tick To happen in the editor viewport*/
virtual bool ShouldTickIfViewportsOnly() const override;

UPROPERTY(BlueprintReadWrite, EditAnywhere)
bool UseEditorTick = false;

/** Tick that runs ONLY in the editor viewport.*/
UFUNCTION(BlueprintImplementableEvent, CallInEditor, Category = "Events")
void BlueprintEditorTick(float DeltaTime);

cpp file

// Separated Tick functionality and making sure that it truly can only happen in the editor. 
//Might be a bit overkill but you can easily consolidate if you'd like. 
void YourActor::Tick(float DeltaTime)
{
#if WITH_EDITOR
	if (GetWorld() != nullptr && GetWorld()->WorldType == EWorldType::Editor)
	{
		BlueprintEditorTick(DeltaTime);
	}
	else
#endif
	{
		Super::Tick(DeltaTime);
	}
}

// This ultimately is what controls whether or not it can even tick at all in the editor view port. 
//But, it is EVERY view port so it still needs to be blocked from preview windows and junk.
bool YourActor::ShouldTickIfViewportsOnly() const
{
	if (GetWorld() != nullptr && GetWorld()->WorldType == EWorldType::Editor && UseEditorTick)
	{
		return true;
	}
	else
	{
		return false;
	}
}

Then when you are done you should be able to add the new BlueprintEditorTick to your event graph and get rolling!

Why not use an Editor Utility Actor?

Editor Utility Actors don’t quite get you the exact functionality that seems useful enough for ticking in the editor via an actor. They will tick in the preview windows of Blueprint and potentially lead to lots of head scratching as you realize that the blueprint you were working on is logging while you are trying to use it in other places. Also, there are times where you actually do want the same actor at runtime as well (it’s rare but totally a valid case).

Final Notes

This is pretty dangerous if you aren’t mindful of what nodes you use in here. Try not to do things like Add Component or any other nodes that would spawn objects into the world, or if you do, make sure you store them and clean them up. Delays might work, but they could be a bit odd as well. Just be aware that it can be fragile at times, so make sure to give Unreal some cake beforehand… ❤️

Implementing Undo in UE4 Editor Utility Widgets and Blueprints

This image shows an example of how to set up an undo-able function call in Unreal 4’s Blueprints. This was reported as a bug and marked “by design” simply because you can do it in a 100% controllable and custom way in Blueprints.

Begin Transaction

Starts the undo stack and allows for anything after this call to be grouped into 1 undo operation.

Transact Object

Every object you want to be undo-able has to be added using the Transact Object node. Then you can do whatever you want to the object, and on undo it will properly go back to whatever it was before you did anything to it.

End Transaction

Closes the gates and stops recording anything else to the undo stack, which then allows you to properly undo your custom tool’s operation.

This was a bit tricky to find, since typing the word “Undo” in Blueprint doesn’t give you much. Under the hood, though, everything undo-based is actually referred to as the Transaction System, which is why these nodes are called Transaction Nodes.
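
For completeness, the same system is available in C++. Here is a minimal editor-only sketch using FScopedTransaction (the actor rename is just a hypothetical example of a change you might want to be undo-able):

#if WITH_EDITOR
#include "CoreMinimal.h"
#include "GameFramework/Actor.h"
#include "ScopedTransaction.h"

void RenameActorWithUndo(AActor* TargetActor, const FString& NewLabel)
{
	// Equivalent of Begin/End Transaction: the scoped transaction opens the undo
	// entry on construction and closes it when it goes out of scope.
	const FScopedTransaction Transaction(NSLOCTEXT("MyTools", "RenameActor", "Rename Actor"));

	// Equivalent of Transact Object: Modify() records the object into the
	// transaction so the change below can be rolled back.
	TargetActor->Modify();
	TargetActor->SetActorLabel(NewLabel);
}
#endif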

That is all, hope this helps and have a great day!