Niagara Grid 2D feels like a superpower! Drawing Locations To a Render Target in Unreal 5.1

Hello folks, my name is Chris Zukowski, and I go by Zuko for short. I have been working in the game industry for 11 years as an environment artist turned technical artist, and I am currently a technical director at Terrible Posture Games. In my free time, however, I’ve been developing Mix Universe: A Musical Sandbox Game.

The main reason for focusing on this tech was to find a quick way to create a texture at runtime that I could use for fog and potentially fluid simulations as well. Previous methods were expensive (around 2-5 ms) and relied on complex systems (like a camera capture actor with particles only that camera can see, or tapping into a widget component’s render target and spawning circle widgets).

After a bit of tinkering with Grid 2D and Niagara simulation stages, the end result is the ability to draw hundreds of circles at a time with very minimal impact on the CPU and GPU. I decided to make a video and example project showing from scratch how to make this Niagara system and use custom HLSL code to draw the circles. You don’t really need any experience with Niagara for this tutorial, and for those who know their way around, you can skip ahead to your liking! I tried to make the tutorial as flexible as possible.

Full video tutorial is here

Example project to follow along with.

Original Tweet that propelled me to make a tutorial. Enough folks were really interested in how it was done, and I didn’t want to alienate anyone who might not have a lot of Niagara experience. Don’t be scared that there is code in there; it’s really not that bad!!

Full Breakdown (Same as Video Tutorial)

Getting Started

The example project has important blueprints that help get things rolling without having to focus on getting data. You can download it for whatever price you feel is fair.

It is an Unreal 5.1 project, so you will need to download 5.1 or higher from the Epic Games Launcher in order to run the project.

When you open the project, there are 2 levels that you can open up.

MixGrid2D_Location_Start – Where we are going to start today.
MixGrid2D_Location_Finished – Where you could go if you don’t wanna follow along and just wanna sift through the finished setup.

Here is what we will be making today.

When you open up MixGrid2D_Location_Start, you will see this.

When hitting play, the sphere will be animating, but there will be no texture. We will be making this from scratch.

Creating the Render Target

Create a Render Target by right clicking and going to Textures->Render Target 

We will call it RT_MixGrid2D_Locations

Double click to open it and change the Address X and Address Y to Clamp.

This ensures that the texture will not continue past the edges.

Everything else by default is fine to leave as is!

Making the material.

This will likely be one of the simplest materials we will ever make. Right click and go to Material to create a new one called MM_MixGrid2D_Locations

Double click to open it. Click anywhere in the blank space of the graph and change the shading model in the left details panel to Unlit 

Click and drag the RT_MixGrid2D_Locations texture into the graph, and hook up RGB to the emissive color.

That is all we need for this demo setup. Later, once everything is working, you can expand this shader to be whatever you need it to be using this texture.
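If it helps to see it in code terms, everything this material does boils down to a single texture sample feeding emissive. Here is a rough, illustrative HLSL sketch of that idea (in the editor it is just the two nodes above, not hand-written shader code, and the names here are only for illustration):

// Conceptual sketch: sample the render target and feed its RGB into Emissive Color.
Texture2D RT_MixGrid2D_Locations;   // the render target we just created
SamplerState RTSampler;             // illustrative sampler state

float3 GetEmissiveColor(float2 UV)
{
    return RT_MixGrid2D_Locations.Sample(RTSampler, UV).rgb;
}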

Creating the Niagara System (No Grid2D yet)

The niagara system we are going to make today will need 2 specific User Parameters that we will hook into the blueprints provided.

Right click and create a new niagara system and call it NS_MixGrid2D_Locations

Double click to open it up. Then anywhere in the niagara system overview graph, right click and go to 

Add Emitter

From this list, choose Empty
This creates a mostly empty emitter so that we have something to work from that compiles properly.

Under properties, we will change the Sim Target to GPUCompute Sim and the Calculate Bounds to Fixed.

Using the GPU in this context is important whenever doing Grid2D simulations since we are iterating over a 256×256 grid (More on that later) 
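To put some quick numbers on that: a 256×256 grid is 65,536 cells, and the simulation stages we set up later run once per cell (or pixel) every frame, which is exactly the kind of massively parallel work the GPU is built for and the CPU is not.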

Fixed Bounds is much cheaper than dynamic bounds and is generally recommended for GPU Simulations.

Next, we will delete the sprite from the render sections since we won’t be needing that for this setup.

Here we can go ahead and add 2 important user exposed attributes to work with. These will be 

Vector 4 Array – LocationsAndSizes

And a Texture Render Target called TextureRenderTarget.

LocationsAndSizes will be the data we will be using to tell the circles where to draw and how big. 

TextureRenderTarget will be set to RT_MixGrid2D_Locations from Blueprint.

We can now save the Niagara system, and jump over to the 2 blueprints included in the example project.

Included Example Blueprints Overview

I think it’s important to understand where the data is coming from and how it is connected. This is why I think it’s good to run through the paces of creating most of the content from scratch. However, this isn’t necessarily a blueprint tutorial, so I didn’t want to spend a ton of time going over blueprint basics or anything like that. Instead, we have 2 blueprints provided to work with and modify with our new assets we created.

BP_MixGrid2D_LocationActor – The sphere actor that is in the level moving back and forth.

BP_MixGrid2D_LocationManager – The actor that holds the plane mesh and the niagara system, and sends the data to the niagara system based on the BP_MixGrid2D_LocationActors in the world.

BP_MixGrid2D_LocationActor 

The BP_MixGrid2D_LocationActor is a simple blueprint setup that is told to move back and forth based on a cached start location and a timeline. It also holds a radius value, set to a random size, that we will tap into.

Radius will be the 4th element in the array of vector 4 LocationsAndSizes that is sent to niagara.

BP_MixGrid2D_LocationManager 

The BP_MixGrid2D_LocationManager holds a blank niagara system, a plane mesh, and the logic for sending the data to the niagara system. We will modify this actor a little bit to finalize sending the data to our new assets.

The Event Graph’s Begin Play gets all of our BP_MixGrid2D_LocationActors that exist in the world and stores them in an array called AllLocationActors. This array’s size is then used to resize the vector 4 array called LocationsAndSizes to match however many actors there are in the world.

Then on Event Tick, we loop through all of the location actors and set each of the corresponding vector 4 array entries to the location actor’s location and radius.

XYZ = Location 

W = Radius

The important part to really understand is that it’s using this NiagaraSetVector4Array node to send the data to our user parameter we created above in our niagara system.
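In code terms, each entry of that array is packed like this (the variable names and values here are just for illustration):

// One float4 entry per location actor, rebuilt by the manager blueprint each tick.
float3 ActorLocation = float3(100.0, 0.0, 50.0);   // example world-space location
float Radius = 75.0;                                // example radius from the actor
float4 Entry = float4(ActorLocation, Radius);       // XYZ = Location, W = Radius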

This is how it connects. So let’s go ahead and make sure that NS_MixGrid2D_Locations in the Blueprint is set up properly.

Click on the component in the components list.

Then set the Niagara System Asset to the one we just created, NS_MixGrid2D_Locations

And ALSO set the TextureRenderTarget user parameter to the Render Target we just created, RT_MixGrid2D_Locations

This is all we need to do for the niagara system to be hooked up and receiving the data.

Last thing we need to do for this blueprint is click on the RenderTargetPlane in the Components list.

And change the material the plane uses to the MM_MixGrid2D_Locations material we made.


After hitting play to test the scene, you will now see a black plane and the spheres moving. 

If you hit F8 while playing, it will detach the camera from the pawn so we can move around and click on the plane.

In the details panel you can expand LocationsAndSizes to see the data we are sending in real time.

This is just a good way to visualize all of the values for debugging purposes really easily.

Setting up Grid 2D from scratch to draw to a texture.

Feel free to skip this section if you already know how to set up Grid2D and simulation stages.

Let’s open up NS_MixGrid2D_Locations again.

We will now set up our Grid2DCollection and RenderTarget data interfaces to work with.

From a basic perspective, Grid2D is simply a data set that allows you to store data per cell in a 2D grid. In our case, we will be treating each cell as if it were a pixel on a texture, but that is just one simple example of usage; you can store any data you want and manipulate it over time however you want.

RenderTarget data interfaces are specifically a way to store and iterate through each pixel of a render target.

Under Emitter Attributes, we will add a Grid2DCollection and call it Grid

And then we will add a RenderTarget2D and call it RenderTarget

NOTE – RenderTarget2D is completely different from TextureRenderTarget. RenderTarget2D is an interface that allows the manipulation of TextureRenderTarget data. It’s easy to confuse the two and their usage.

It should look like this.

Now under Emitter Spawn on the emitter itself, we can hit the orange + icon and go to Set Parameters.

Then when selected, it will give you the option to add parameters to set. Hit the plus button and add the Grid and RenderTarget we set up.

From here, we will override our buffer format and set it to Half Float.

Then set our texture size to 256×256

And finally, update the RenderTarget User Input to be our TextureRenderTarget user parameter.

We also need to add another module to Emitter Spawn called Grid 2D Set Resolution*

NOTE: the * means that you need the Niagara Fluids plugin enabled for it to even show up (the example project has this on). Also, you must uncheck the library filter checkbox for it to show up properly in the search results. Hopefully this will be fixed in later versions of Niagara.

For the set resolution module, we need to hook up the grid to be our grid attribute and set the num cells X and Y to our render target resolution.

Now we will add simulation stages to iterate over this data in passes. In our case, all a simulation stage really does is change the iteration from once per particle to once per cell of the grid, and each cell is basically one pixel color of our render target.

Adding a simulation stage is pretty simple, find the Stage button and click it to add a new stage.

This will create a new stack group in the emitter called None 

When clicking on Generic Simulation Stage Settings, we can set the name to WriteToGrid.

We can also change the Iteration Source to Data Interface and set it to be our Grid data interface.

This is how the simulation stage will know what data to store and manipulate and also how to iterate over the data per cell instead of per particle.

Next, we will create 2 more stages. 

Initialize

This will initialize our grid data once with the proper default values that we set on our attributes later.

WriteToTexture

This will be set to iterate over our render target interface NOT the grid.

Make sure the order of the stack is as follows.

Now that we have this setup, we can start actually using grid 2D to write a basic color to our texture for testing.

Under the WriteToGrid stage, we can hit the green plus icon to add a new scratch pad module.

When doing this, it will add a new module under Local Modules. We can rename it to WriteToGrid as well.

We can do the same and add a new scratch pad under the WriteToTexture stage as well.

What we can do now is set each grid cell to contain a blue color for testing.

In the WriteToGrid local module, let’s add a color to the Map Set.

And call it RGBA

Then we can right click that variable and change the namespace to StackContext

StackContext is really important in order to automatically write this data to the Grid2D cell. It specifically refers to the simulation stack that we are currently in, which is iterating over our Grid2D interface.

And finally change the color to blue.

I know going over each added thing is verbose, but I think it’s important in order to understand the quirks, since it’s really easy in niagara to get lost if you accidentally mess up one thing. Here is what WriteToGrid should look like right now. 

We can hop over to WriteToTexture now and set up sampling the grid and writing the blue color to each pixel.

In our Map Get we can add an input for a Grid2DCollection which we will later hook up to the Grid attribute we made earlier. 

We can call this Grid as well.

We will do the same with the RenderTarget2D and call it RenderTarget

From here we can drag off of the Grid input and create a SamplePreviousGridVector4Value node along with an Execution Index to Unit node. It’s important here to set the Attribute to the exact same name as the attribute we used initially to set the blue color. In this case it is RGBA

We can now drag off of the RenderTarget pin and create a SetRenderTargetValue node. Then hook up the Value from the SamplePreviousGridVector4Value and finally drag off the Grid again and do ExecutionIndexToGridIndex which will plug directly into IndexX and IndexY

This gives us a graph that looks like this for Write To Texture.
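For reference, the two index nodes are doing simple math on the execution index under the hood. Assuming the usual row-major convention and treating ExecutionIndex, NumCellsX, and NumCellsY as given values (so this is a sketch of the idea, not code to paste into the graph), it works out to roughly:

// One execution per pixel of the render target.
int2 Index;
Index.x = ExecutionIndex % NumCellsX;   // what ExecutionIndexToGridIndex gives us
Index.y = ExecutionIndex / NumCellsX;

// What ExecutionIndexToUnit gives us: a 0..1 UV at the cell center.
float2 Unit = (float2(Index) + 0.5) / float2(NumCellsX, NumCellsY);

// SamplePreviousGridVector4Value then reads the RGBA value the WriteToGrid stage wrote at that UV,
// and SetRenderTargetValue writes it to the pixel at Index.x, Index.y.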

We now need to hit Apply Scratch to compile the module and then go back to our emitter and select the WriteToTexture module.

Here we will see empty attributes. We will need to fill them in with our Grid and RenderTarget by clicking the drop down arrows.

Giving us a result that looks like this.

Now you will see that our texture is blue! Note: you may need to hit play to see the results, as updating outside of play-in-editor is a bit inconsistent.

This is the basics! From here we can start doing the fun stuff!

Drawing the locations to the Render Target

Now that we have a basic setup and our emitter looks like this, we can talk about what we are actually here to do: taking the LocationsAndSizes data and using it to draw directly to the texture!

I think it’s important to go over the thought process here.

I set the write to grid color back to black before moving forward with this top down image of the spheres.

When iterating over each cell of the Grid2D, we are acting as if it is the pixel color information. 

Using the distance from each cell (converted to world space) to the input location, we can determine whether the pixel falls inside or outside of the circle and color it differently. In the example below, the red pixels are outside and the white pixels are inside. In our final version, red will be replaced with black.
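In code form, the per-pixel test being described is just a distance comparison. A minimal sketch, using illustrative names and example values for what we will wire up shortly:

float4 CircleCenterAndRadius = float4(0.0, 0.0, 0.0, 100.0);   // example: circle at the origin, radius 100
float3 PixelWorldLocation = float3(50.0, 0.0, 0.0);             // example: the cell we are evaluating
float Dist = length(CircleCenterAndRadius.xyz - PixelWorldLocation);
bool bInsideCircle = Dist < CircleCenterAndRadius.w;            // inside -> white, outside -> red (black in the final version)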

So moving forward now with this idea in mind, we can jump back into our WriteToGrid module. 

We will need to set up an input for the Grid the same as we did for the WriteToTexture module.

This gives us something like this. 

Now we can establish our grid location by converting the current cell we are executing on into world space so we can properly do our distance check.

NOTE: for the ease of this tutorial, I have set things up so that the plane in blueprint is at 0,0,0 in world space. If you wanted to change that, you would have to offset this math by the location desired. 

Now we can drag off of the Grid and get the ExecutionIndexToGridIndex and GetNumCells nodes.

With this we can center the grid data by offsetting it by -0.5 * CellCount

From here we can make a vector using the X and Y values and leaving Z set to 0.

Then we will multiply this by another Input float value called WorldScale

This multiplier is how we calibrate the scale to match perfectly with the size of the plane.

We can drag off the result now and set this to a local attribute called GridLocation

Dragging off of the Dest Exec pin, we can do another Parameter Get and get the GridLocation we just set.

We can then break the vector and connect it to our RGBA output to test if the values are what we expect.

The full module at this point.
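Written out as plain HLSL math, the conversion those nodes perform looks roughly like this (treating Index, NumCellsX, NumCellsY, and InWorldScale as the values coming from ExecutionIndexToGridIndex, GetNumCells, and our new input):

// Shift the cell index so the middle of the grid lands at 0,0 instead of a corner.
float2 Centered = float2(Index) - 0.5 * float2(NumCellsX, NumCellsY);

// Build a position on the plane (Z = 0) and scale it into world units.
float3 GridLocation = float3(Centered, 0.0) * InWorldScale;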

As before, we need to also hook up our inputs properly. So we can jump back to the emitter and select the WriteToGrid module. We can set our Grid and then set the WorldScale to 1.0

This will give you a result that should look like this.

The only reason the texture is black on one side is that those values are negative, thus rendering black. Now that our method for converting to world space is working properly, we can jump into iterating over the locations per cell to check their distances!

So back in the WriteToGrid module, we can delete the test setup and keep the Map Get node with the GridLocation.

We will also add a few more inputs to save us some time:
Input Vector 4 Array: LocationsAndSizes

Input float: RadiusScale

(Existing) RGBA 

(Existing) DeltaTime 

We will be using all of this data inside of a custom HLSL node.

Custom HLSL Setup

Why use Custom HLSL? 

Sometimes the logic is complex enough that it may feel easier to code it in HLSL rather than with nodes. In our case, however, we need a for loop so we can iterate over all of our locations, which you simply cannot do with nodes. Epic does warn that this is not advised; my guess is that it’s related to stability of the editor, and potentially the game, if you get too complex with what you are doing in a for loop. In this case, I haven’t seen any issues.

If you are already familiar with Custom HLSL, you can feel free to skip to the end where the full code snippet is shared rather than taking the steps. However, if you are new to it, I highly encourage you to follow the steps here since there are a few gotchas along the way.

What we will do first is drag off of GridLocation and create a new CustomHLSL node.

We will promptly delete all of Epic’s advice in green here so the node is a bit smaller and blank for us to work with.


Then we will right click the GridLocation pin on the node and Rename it to InGridLocation.
NOTE: When dragging inputs into the custom HLSL node, I have found that renaming them so they are unique to the graph is very important to prevent issues with data manipulation and bugs. So for this tutorial I am preventing the issues by adding the prefix In to every HLSL input.

Next we will drag all 4 of the other pins as well and rename them using the prefix. We will also drag from the output pin into RGBA and rename the output to OutRGBA.

After doing this, you will get 2 errors when you click Apply Scratch

These errors are complaining about the fact that the input RGBA for the stack context has no default value. We can assign one by going to our Parameters tab, clicking on the StackContext RGBA, and adjusting its Default Mode to Value instead of Fail if not previously set.

Also, once again we have to go back to our emitter and set up the inputs properly before we move on.

With this we can jump back into the WriteToGrid module and add some simple code to the Custom HLSL for testing.

OutRGBA = InGridLocation.x;

This code is mimicking our test from before so the output result looks exactly the same if the HLSL node is functioning properly!

Now we can finally enter the code we need.

The Finished HLSL Code

Good test settings for this demo are setting RadiusScale to 2.8 and WorldScale to 6.

Then for our custom HLSL we can enter the final code in.

int Out_Num;
InLocationsAndSizes.Length(Out_Num);

// Accumulated falloff for this grid cell across all of the circles.
float4 NewGridValue = 0;

// Valid indices are 0 to Out_Num - 1, so use < rather than <=.
for(int Index = 0; Index < Out_Num; Index++)
{
    // XYZ = world-space location of the actor, W = its radius.
    float4 Out_Value;
    InLocationsAndSizes.Get(Index, Out_Value);

    // Distance from this cell's world-space location to the circle center.
    const float Distance = length(Out_Value.xyz - InGridLocation);
    if(Distance < (Out_Value.w * InRadiusScale))
    {
        // Linear falloff: 1 at the center, 0 at the scaled radius.
        const float Falloff = (1 - (Distance / (Out_Value.w * InRadiusScale)));

        // Stack circles on top of each other additively.
        NewGridValue = NewGridValue + Falloff;
    }
}

// Blend from last frame's value toward the new value for a smooth result.
OutRGBA = lerp(InRGBA, NewGridValue, InDeltaTime * 5);
  1. First we get the length of the LocationsAndSizes array and store it in a value called Out_Num.
  2. Next we create a new float4 value called NewGridValue and assign it a default value of 0. This is what will be evaluated and added to during the for loop.
  3. Then we set up the for loop to iterate through the length of LocationsAndSizes.
  4. Then we get the value at the current index of the for loop. This value is a float4 called Out_Value, where XYZ is the location and W is the radius.
  5. Next, we calculate the distance by subtracting the two locations and getting the length of the resulting vector.
  6. We then calculate a smooth falloff using the current distance value and the radius of the circle multiplied by the RadiusScale value.
  7. Then we add the falloff value on top of NewGridValue. This allows multiple circles to stack on top of each other additively.
  8. And finally we interpolate between the previous frame’s RGBA and our current NewGridValue to get a smooth output result. The multiplier on InDeltaTime controls how strongly the new frame blends on top of the old one.
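To make the falloff concrete with some example numbers: with a radius of 50 and RadiusScale at 2.8, the effective radius is 140 units. A cell right at the circle’s center gets a falloff of 1, a cell 70 units away gets 1 - 70/140 = 0.5, and anything 140 units away or further contributes nothing.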

The result is that the circles should now blend in smoothly as the spheres get closer to the plane.

That concludes setting up the basics of this method to give you something you can drive many different things off of. I hope this helps and am excited to see what other folks can do with this method!

Taking this further.

For Mix Universe, I also have FogAdjustments and FogColors arrays. The colors come in and show up when the nodes play, and the sizes also adjust slightly during this moment, allowing the fog to shrink and grow.

Hope this helps!

And that is it! I hope this is useful and please check out Mix Universe if you get the chance and are interested in learning more about the project or seeing the fog in action. 

If you are interested in learning more about Grid2D and simple fluid sims, check out Partikel’s Grid2D tutorials as these are what I used to get started with the basics!

Grid2D Quickstart
https://www.youtube.com/watch?v=XVKpofOj44c
Grid2D Advection
https://www.youtube.com/watch?v=4NxBonHkyNg

Hope you have a good day!

Mix Universe: Playtest Update 3

Mix Universe is still trucking along! You can check out the Mix Universe website for more details on the game!

What’s new and what is fixed?

  • Patterns have been upgraded to allow them to chain when they aren’t set to looping.
  • Patterns more clearly have start and end points indicated by spherical nodes that you can connect to for activation.
  • Patterns now have randomization controls to allow for mixes to have much more variety.
  • All Nodes can adjust their height using Q and E which allows for a lot more visual flexibility when laying out mixes.
  • New Rock Visualizer Node – Similar to the planet node, but it’s a rock instead.
  • New End Point Node – Stops the mix and shows stats on it.
  • Every Mix now has an “Origin” layer which you can access in the lower left section of the edit panel. This layer controls which layers 1-6 are activated by default at the start of the mix.
  • Layer Switchers now show visually which layer they are set to switch to.
  • Headbob’s character model has been updated.
  • The default space theme’s lighting and fog has been updated and particles for the main atmosphere have been fixed.
  • Multi-Edit double click functionality has been added for Samples, Modulators, Link Duration, Note Duration, Note Octave, and Note Value.
  • Onscreen UI limit has been increased from 10 to 32 to allow for up to 32 nodes to edit at once.
  • Simple edit mode has been added which displays a much simpler “Selected” UI.
  • Added mix history when mixes are uploaded to the store; now if someone saves a mix off of someone else’s mix, we can see where the original mix came from and who made it.
  • Added a profile name you can change in the options menu if you don’t want to use your steam name for artist info on mixes.
  • Added a mix artist overlay which shows title and artist names.
  • Improved CPU performance when it comes to elements in the scene, though there is still a ways to go for mega mixes.
  • Fixed issues with UI’s getting stuck during complex edits or longer mix sessions.

Not too much longer now!

I will be gearing up for an early access launch in February 2023. This means that the playtest will be closed to any new players automatically in 2 weeks! Please jump in now if you wanna give feedback. Otherwise, I’ll see you on early access launch day (to be announced).

Shoutout to the community! Here are some cool mixes that players have made so far!

UE5 MetaSound Grain Delay and Procedural Study

The MetaSound that I will be talking about is in this video.

Having fun with the procedural nodes and grain delay.

In short, my goal for this was to try out some newer 5.1 quality of life improvements for MetaSound and the grain delay node.

What is MetaSound? A DAW?


This link says it best.

“A DAW and a game audio engine serve totally different needs and have opposed technical constraints and use cases. A game audio engine is about building audio experiences with interactivity and runtime procedural generation as the core focus.”

MetaSound is Unreal’s new audio plugin in UE5 aiming to elevate functionality usually required by sound designers for games.

https://dev.epicgames.com/community/learning/talks-and-demos/eXmE/unreal-engine-unreal-audio-engine-unreal-fest-2022-and-gamesoundcon-2022-summaries-and-faq

Important Resources I used to get things going.


Here are 2 videos that will be really helpful in understanding what is going on here.

Quick Start
Kick Drum Synth

This Study’s MetaSound is free to download and tinker with below!


Key things this MetaSound above can do.


  1. Uses an alterable scale to determine the chord choices
  2. Uses bool arrays to determine the beat
  3. Generates a Lead, Kick, Snare, Hi Hat, and Pad Chords
  4. Routes everything through multiple FX such as filtering and Grain delay.

You can try this MetaSound out for yourself by downloading it from Gumroad!

https://chriszuko.gumroad.com/l/awgbr

Quick Grain Delay Notes


The grain delay is a bit of a wild node. The biggest hurdle is giving it some audio and a trigger.

You must have something plugged into grain spawn for it to do anything. This is what tells it to grab a chunk from the audio that is going through it to use for the delay.

I ended up taking this pulse and delaying it further to offset when it decides to spawn grains.

Pulse is happening every beat.

This gave me a wild ethereal type reverb effect. For the audio going in, I needed to tune that a bit more.

I am filtering and compressing the audio in order to reduce artifacts that can happen when a sound gets a bit too loud inside of the grain delay. Some whacky things can happen.

I also realized quickly that transients from a kick, hi-hat, or snare weren’t going to work well for the kind of effect I was going for. It sounded jarring, so I made sure that the only audio going into the grain delay was the pad and the lead instruments.

The entire grain delay setup looks like this (again, you can download this MetaSound above if you wanna poke around in it more).

I know this might seem overwhelming, but a lot of what I was doing was straight up experimenting, so I encourage you to try something out yourself and tinker with the MetaSound on your own!

Project Mix is now Mix Universe!

Previously, I talked through the entire process of getting to the demo video released about a year ago, showing what I was calling “Project Mix” at the time. This has now turned into a full musical sandbox game that I am calling Mix Universe!

Link to that article: https://chriszuko.com/project-mix-current-progress/

What is Mix Universe?


A musical sandbox like no other where you connect nodes together to make a truly spectacular audio visual experience.

I started this thing with very simple beginnings and I am excited to talk about that more in-depth in the future. This post is mostly just to get things up to date here on my main site.

Mix Universe Early Playtest Release Trailer

Heading towards an early access launch!


I want to get this thing out there in early access so folks can enjoy it without worrying about new features completely destroying everything they’ve made. An early access launch will allow this.

Mix Universe Early Access Teaser Trailer

Some fun screenshots

Thanks to everyone who has supported me along the way! I am excited for what the future may hold here and am glad to have made it this far in development!