Most of the examples don't explain how it actually works, what you
can do and what you can't do.
A lot of things are also not documented yet, and their behavior is not obvious.
So I want to explain how it works: working with CompositorEffects is really only possible
once you understand the commands and workflow of Godot's RenderingDevice.
THE RENDERING DEVICE
Most examples store a reference like
RenderingDevice rd = RenderingServer.GetRenderingDevice();
somewhere and call its methods to create, destroy or manipulate the objects that are needed on the graphics card.
var computeList = rd.ComputeListBegin();
rd.ComputeListBindComputePipeline( computeList, pipeline );
rd.ComputeListBindUniformSet( computeList, uniformSet, 0);
rd.ComputeListSetPushConstant( computeList, pushConstants, pcSize );
rd.ComputeListDispatch( computeList, xGroups, yGroups, zGroups );
rd.ComputeListEnd();
A RenderingDevice executing a compute shader: it binds a pipeline, a uniform set and push constants,
and dispatches the work with the workgroup dimensions xGroups/yGroups/zGroups.
The RenderingDevice is Godot's low level API for rendering on different platforms with different backends.
As is typical for raw wrappers of low level APIs, it doesn't hand you objects but handles
called RIDs, which you pass to long method names that are usually
prefixed with an object name: for compute lists it's ComputeListBegin, ComputeListDispatch and so on.
This way you get all the fun of organizing and managing the objects on the graphics card yourself.
For a developer it means you need to manually create and destroy every graphics card object.
If a texture is not used anymore, you are responsible for deleting it!
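To give an idea of what that looks like with the raw API, here is a minimal sketch of creating and freeing a texture by hand; the format and usage bits are just assumptions for illustration:
// Sketch: manually creating and freeing a storage texture.
// Format, size and usage bits are assumptions for illustration.
var rd = RenderingServer.GetRenderingDevice();

var format = new RDTextureFormat();
format.Width = 1920;
format.Height = 1080;
format.Format = RenderingDevice.DataFormat.R8G8B8A8Unorm;
format.UsageBits =
    RenderingDevice.TextureUsageBits.StorageBit |
    RenderingDevice.TextureUsageBits.SamplingBit;

Rid texture = rd.TextureCreate( format, new RDTextureView() );

// ... use the texture ...

// Nobody frees this for you: forgetting this call leaks GPU memory.
rd.FreeRid( texture );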
But since I don't want to debug whether I called everything in the correct order to create a texture, sampler, shader or pipeline,
I created a wrapper for Godot's RenderingDevice named RDContext.
SOME RD CONTEXT
To make things maintainable and to simplify working with the RenderingDevice, the
RDContext was born.
It wraps the low level instructions and ensures everything is set up correctly.
I will also make a tutorial that focuses solely on it, but that will come in the future.
One thing to note: the RDContext (and the RenderingDevice itself) is not exclusive to CompositorEffects.
They can be used for custom rendering work, like creating a second, local device
that computes other important things in parallel.
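As a minimal sketch of that idea (the actual compute work is left out), a local RenderingDevice can be created and driven independently of the main renderer:
// Sketch: a local RenderingDevice that runs independently of the
// main renderer. Local devices must be submitted and synced manually.
RenderingDevice localRd = RenderingServer.CreateLocalRenderingDevice();

// ... create shaders, buffers and record a compute list on localRd ...

localRd.Submit();
// Do other work here; Sync() waits for the GPU to finish.
localRd.Sync();

// A local device also has to be freed when you are done with it.
localRd.Free();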
Instead of passing RIDs around, RDContext uses wrapper objects that represent the resources living on the GPU
and associate them with their RID, so that you deal with real objects. Some examples are
RDTexture, RDSampler, RDUniformSet and RDComputeList.
For each object that is created, the RDContext registers what needs to be cleaned up later.
This way an RDContext can automatically clean up after itself when it's no longer needed.
rdContext.CleanUp();
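To illustrate the idea behind it, here is a minimal sketch of that cleanup-tracking pattern; it is not the actual RDContext implementation, just the concept:
using Godot;
using System.Collections.Generic;

// Sketch of the cleanup-tracking pattern (not the real RDContext API):
// every RID that gets created is registered, so one call frees them all.
public class TrackedResources
{
    RenderingDevice rd = RenderingServer.GetRenderingDevice();
    List<Rid> owned = new List<Rid>();

    public Rid Track( Rid rid )
    {
        owned.Add( rid );
        return rid;
    }

    public void CleanUp()
    {
        owned.ForEach( rid => { if ( rid.IsValid ) { rd.FreeRid( rid ); } } );
        owned.Clear();
    }
}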
Additionally, RDContext has a message system that writes and caches error messages. In verbose mode,
you can track down errors easily, since it documents every operation.
The introduction of the RDContext is important because all RokojoriCompositorEffects use it, and I
strongly encourage you to create your own layer when you want to write something on your own.
So, back to CompositorEffects...
COMPOSITOR EFFECT FLOW
Before I show the complex, good stuff, I want to explain the basic concept of a CompositorEffect without
RDContext.
The flow example uses a simplified version of the GreyScale CompositorEffect from above. Please check the files above
if you want to look at a working version.
CONSTRUCTION
A CompositorEffect needs to attach itself to the RenderingServer during its constructor
and must initialize there.
You can assign the EffectCallbackType, but keep in mind that it can be changed in the editor.
It's easy to forget that this has to happen in the constructor;
otherwise all objects stay uninitialized and the RenderCallback throws waves of errors into the console.
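Here is a minimal sketch of such a constructor; the class name is hypothetical and Initialize() is the method shown in the next section:
using Godot;

[Tool]
[GlobalClass]
public partial class GreyScaleEffect : CompositorEffect
{
    RenderingDevice rd;
    Rid shader;
    Rid pipeline;

    public GreyScaleEffect() : base()
    {
        // Can still be changed in the editor afterwards.
        EffectCallbackType = EffectCallbackTypeEnum.PostTransparent;

        // Attach to the RenderingServer and set everything up right away,
        // otherwise _RenderCallback runs against uninitialized objects.
        rd = RenderingServer.GetRenderingDevice();
        Initialize();
    }
}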
INITIALIZATION
In the initialization phase, all relevant objects get compiled or created: shaders, textures, pipelines, samplers and so on.
For a single compute shader pass, you compile a shader loaded from a file:
first to SPIR-V (an intermediate, binary shader format) and then create a shader object from it.
This shader object is passed to the compute pipeline creation method, which gives you a pipeline back.
Errors can appear in this phase, most likely a missing file or a wrong configuration.
Here too, when an error happens, the RenderCallback does not care and happily generates
waves of errors in the console.
public void Initialize()
{
    var shaderPath = "res://compositor-shader.glsl";
    var shaderFile = GD.Load<RDShaderFile>( shaderPath );
    var shaderSpirv = shaderFile.GetSpirV();

    shader = rd.ShaderCreateFromSpirV( shaderSpirv );

    if ( shader.IsValid )
    {
        pipeline = rd.ComputePipelineCreate( shader );
    }
}
RENDERING
In the rendering phase, which happens every frame, all variable properties like
uniforms and push constants are collected
and assigned. Then the compute shader is run for every view (in VR you have multiple).
Most examples do a million null and validity checks here; I removed them all for clarity.
But if you have errors: waves of them in the console.
public override void _RenderCallback( int t, RenderData d )
{
    var renderSceneBuffers =
        ( RenderSceneBuffersRD ) d.GetRenderSceneBuffers();

    PrepareUniformSetsAndPushConstants();

    int views = (int) renderSceneBuffers.GetViewCount();

    for ( var i = 0; i < views; i++ )
    {
        var view = (uint) i;
        Rid inputImage = renderSceneBuffers.GetColorLayer( view );

        var uniform = new RDUniform();
        uniform.UniformType = RenderingDevice.UniformType.Image;
        uniform.Binding = 0;
        uniform.AddId( inputImage );

        var uniformSet = CreateUniformSet( uniform );

        ProcessComputeList();
    }
}
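CreateUniformSet and ProcessComputeList are placeholders in this simplified version; with the raw RenderingDevice, and using the shader and pipeline fields from the initialization, they could look roughly like this:
Rid CreateUniformSet( RDUniform uniform )
{
    // Bundle the uniform(s) into a set matching binding set 0 of the shader.
    var uniforms = new Godot.Collections.Array<RDUniform> { uniform };
    return rd.UniformSetCreate( uniforms, shader, 0 );
}

void ProcessComputeList( Rid uniformSet, uint xGroups, uint yGroups )
{
    // Record and dispatch the compute work, as shown at the beginning.
    var computeList = rd.ComputeListBegin();
    rd.ComputeListBindComputePipeline( computeList, pipeline );
    rd.ComputeListBindUniformSet( computeList, uniformSet, 0 );
    rd.ComputeListDispatch( computeList, xGroups, yGroups, 1 );
    rd.ComputeListEnd();
}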
RDGRAPH
Since even this very simplified example is already complicated, I needed something that would also take
away the pain of setting things up and assigning them: RDGraph.
The reason is that even simple effects like a blur often require more than one shader operation.
A blur needs a copy of the screen buffer, otherwise it would concurrently read from and write to the same image.
Blurs can also be implemented in multiple stages, which requires extra code to ping-pong between texture targets.
For these tasks, RDGraph takes components from RDContext and lets you connect them as a graph of processing nodes,
which makes filters, processors and shaders easy to reuse and combine.
Again, RDGraph is not limited to CompositorEffects; it is a general tool for working with the RenderingDevice.
It automatically handles the configuration for CompositorEffects and lets you set up a graph that is then used to
render the effect.
This makes it possible to create classes like RG_Copy, which copies one image to another, or RG_Blur, which blurs an input image
and writes it to an output image, very similar to visual node editor systems.
This is important to know because the majority of effects in the Rokojori Action Library use the RDGraph to compose
effects out of multiple smaller, reusable components.
OUTLINES EFFECT EXAMPLE
So, let me show you how the DepthOutlinesEffect works.
You can either look at the code in the repository
or at the listing below. The explanation continues after the code.
using Godot;

namespace Rokojori
{
    [Tool]
    [GlobalClass]
    public partial class DepthOutlinesEffect:EdgeCompositorEffect
    {
        // INITIALIZE IN CONSTRUCTOR!
        public DepthOutlinesEffect():base()
        {
            Initialize();
        }

        // UI PARAMETERS
        [ExportGroup( "Main")]
        [Export( PropertyHint.Range, "0,1") ]
        public float amount = 1f;

        [Export( PropertyHint.Range, "-1,1") ]
        public float outlineWidth = 1f;

        [Export]
        public CurveTexture outlineWidthCurve =
            new CurveTexture().WithCurve(
                new Curve().WithValues( 0.5f, 0.5f )
            );

        [Export]
        public Color edgeColor = Colors.Black;

        [Export( PropertyHint.Range, "0,1") ]
        public float edgeDistanceFade = 0.2f;

        [Export]
        public Color fillColor = new Color( 1.0f, 1.0f, 1.0f, 0.0f );

        [Export]
        public Vector2 rimOffset = Vector2.Zero;

        [Export]
        public float rimContrast = 1.0f;

        [Export( PropertyHint.Range, "0,1") ]
        public float rimStrength = 1.0f;

        [Export( PropertyHint.Range, "0,1") ]
        public float zEdgeAmount = 1f;

        [Export( PropertyHint.Range, "0,1") ]
        public float normalEdgeAmount = 1f;

        [Export( PropertyHint.Range, "0,1") ]
        public float normalEdgeAmountMin = 0.05f;

        [Export( PropertyHint.Range, "0,1") ]
        public float normalEdgeAmountMax = 0.15f;

        [Export]
        public float zTreshold = 0.1f;

        [Export]
        public CurveTexture zTresholdCurve =
            new CurveTexture().WithCurve(
                new Curve().WithValues( 1, 1 )
            );

        [Export]
        public float edgeIntensity = 1f;

        [Export]
        public CurveTexture edgeIntensityCurve =
            new CurveTexture().WithCurve(
                new Curve().WithValues( 1, 1 )
            );

        [Export( PropertyHint.Range, "0,1") ]
        public float adaptiveScaleAmount = 0.5f;

        [Export]
        public float adaptiveScaleNormalizer = 1f;

        [Export ]
        public Vector2 zInput = new Vector2( 0.1f, 4000f );

        [Export ]
        public Vector2 zOutput = new Vector2( 0f, 1f );

        // GRAPH NODES
        RG_ScreenColorTexure screenColorTexture;
        RG_ScreenDepthTexture screenDepthTexture;
        RG_ScreenNormalRoughnessTexture screenNormalRoughnessTexture;
        RG_BufferTexture bufferTexture;
        RG_ImageTexture zTresholdTexture;
        RG_ImageTexture edgeIntensityTexture;
        RG_ImageTexture outlineWidthTexture;
        RG_GenerateViewZ generateViewZ;
        RG_ZOutlines zOutlines;

        void Initialize()
        {
            screenColorTexture = new RG_ScreenColorTexure( graph );
            screenDepthTexture = new RG_ScreenDepthTexture( graph );
            screenNormalRoughnessTexture =
                new RG_ScreenNormalRoughnessTexture( graph );
            bufferTexture = RG_BufferTexture.ScreenSize( graph );
            zTresholdTexture = new RG_ImageTexture( graph );
            edgeIntensityTexture = new RG_ImageTexture( graph );
            outlineWidthTexture = new RG_ImageTexture( graph );
            generateViewZ = new RG_GenerateViewZ( graph );
            zOutlines = new RG_ZOutlines( graph );

            graph.InitializeNodes();

            generateViewZ.SetTextureSlotInputs(
                screenDepthTexture, bufferTexture );
            generateViewZ.input.UseLinearSamplerEdgeClamped();

            zOutlines.SetTextureSlotInputs( screenColorTexture, screenColorTexture );
            zOutlines.input.UseLinearSampler();
            zOutlines.AddTextureSlotInput( bufferTexture )
                .UseLinearSamplerEdgeClamped();
            zOutlines.AddTextureSlotInput( zTresholdTexture )
                .UseLinearSamplerEdgeClamped();
            zOutlines.AddTextureSlotInput( edgeIntensityTexture )
                .UseLinearSamplerEdgeClamped();
            zOutlines.AddTextureSlotInput( screenNormalRoughnessTexture )
                .UseLinearSamplerEdgeClamped();
            zOutlines.AddTextureSlotInput( outlineWidthTexture )
                .UseLinearSamplerEdgeClamped();

            graph.SetProcessOrder(
                screenColorTexture,
                screenDepthTexture,
                screenNormalRoughnessTexture,
                bufferTexture,
                zTresholdTexture,
                edgeIntensityTexture,
                generateViewZ,
                zOutlines
            );
        }

        protected override void ForAllViews()
        {
            zTresholdTexture.SetImageTexture( zTresholdCurve );
            edgeIntensityTexture.SetImageTexture( edgeIntensityCurve );
            outlineWidthTexture.SetImageTexture( outlineWidthCurve );

            var projection = context.GetCameraProjection().Inverse();

            generateViewZ.constants.Set(
                projection.X,
                projection.Y,
                projection.Z,
                projection.W,
                zInput.X,
                zInput.Y,
                zOutput.X,
                zOutput.Y
            );

            zOutlines.constants.Set(
                amount * edgeColor.A,
                edgeColor.R,
                edgeColor.G,
                edgeColor.B,
                zInput.X,
                zInput.Y,
                zTreshold,
                edgeIntensity,
                adaptiveScaleAmount,
                adaptiveScaleNormalizer,
                zEdgeAmount,
                normalEdgeAmount,
                normalEdgeAmountMin,
                normalEdgeAmountMax,
                rimOffset.X,
                rimOffset.Y,
                fillColor,
                rimContrast,
                rimStrength,
                Mathf.Pow( 10f, outlineWidth ),
                edgeDistanceFade
            );
        }
    }
}
OUTLINES EXAMPLE DETAILS
The example creates a couple of RDGraph nodes, initializes and connects them,
and finally sets up the graph's process order.
It takes advantage of RDGraphCompositorEffect, which creates the RDGraph in the constructor.
The nodes that are used can be put into two categories: data nodes and process nodes.
While data nodes prepare or provide data (mainly textures), process nodes take other nodes
and manipulate them.
The effect uses some of the most important data nodes for textures.
RG_ScreenColorTexture
Resolves the screen color texture of the current view
RG_ScreenDepthTexture
Resolves the screen depth texture of the current view
RG_ScreenNormalRoughnessTexture
Resolves the screen normal roughness texture of the current view
RG_BufferTexture
Creates a new texture that can be written to or read from. It is assigned
with an RG_TextureCreator, which can handle dynamic or fixed sizes automatically (like ScreenSize).
RG_ImageTexture
Creates a texture that can be assigned from an external Texture2D
OUTLINES PROCESSING DETAILS
The nodes mentioned above are all data nodes, which don't do any processing.
They just ensure that the right textures are available at the correct time.
The actual processing nodes are:
RG_GenerateViewZ
This takes a depth texture and buffer texture, converts depth to view-z and writes it to the buffer texture.
RG_ZOutlines
This node takes a lot of textures, including the z texture, normal roughness texture and color texture
and writes back the computed outlines to the color texture.
RG_ImageProcessor is an RDGraph node that uses a shader with at least one input and one output texture.
Additional textures can also be passed into it. It is the base for most process nodes.
Classes extending RG_ImageProcessor don't define anything besides the path of their shader,
so they are mostly boilerplate around that path.
The shader itself should define at least two texture slots (image2D or sampler2D).
OUTLINES GENERATING Z
To get an idea of how such a process class is written, here's the full source of RG_GenerateViewZ:
public class RG_GenerateViewZ:RG_ImageProcessor
{
    public static readonly string directory =
        "Nodes/Processors/Depth/GenerateViewZ/";

    public static readonly string name =
        "GenerateViewZ.glsl";

    public static readonly string shaderPath =
        RDGraph.Path( directory + name );

    public RG_GenerateViewZ( RDGraph graph ):
        base( graph, shaderPath ){}
}
It's basically very short, and even shorter in the real code base; I only added the extra
variables here so the lines wrap better on the mobile webpage.
To use the node inside the CompositorEffect, the textures are assigned and a sampler is created for the input.
After that setup, the order of execution is defined in the graph: usually texture sources first, then the processors.
// ---- stuff ----

generateViewZ.SetTextureSlotInputs(
    screenDepthTexture, bufferTexture );
generateViewZ.input.UseLinearSamplerEdgeClamped();

// ---- stuff ----

graph.SetProcessOrder(
    screenColorTexture,
    screenDepthTexture,
    screenNormalRoughnessTexture,
    bufferTexture,
    zTresholdTexture,
    edgeIntensityTexture,
    generateViewZ,
    zOutlines
);
The shader uses classic ingredients, like an image2D, which can only be accessed via integer ivec2 coordinates, and
a sampler2D, which samples an image with normalized coordinates.
Since the compute shader invocations do not align perfectly with the image size, we have to check
the bounds inside the shader.
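The reason lies on the C# side of the dispatch: the workgroup counts are rounded up. A minimal sketch, assuming the shader declares a local workgroup size of 8x8:
// Sketch: rounding the workgroup counts up, assuming local_size_x = 8,
// local_size_y = 8 in the shader. The last workgroups can reach past the
// image border, which is why the shader checks the bounds before writing.
Vector2I size = renderSceneBuffers.GetInternalSize();
uint xGroups = (uint) ( ( size.X - 1 ) / 8 + 1 );
uint yGroups = (uint) ( ( size.Y - 1 ) / 8 + 1 );
rd.ComputeListDispatch( computeList, xGroups, yGroups, 1 );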
Before its main function, the shader defines:
layout( push_constant, std430 )
uniform Parameters
{
    vec4 m0;
    vec4 m1;
    vec4 m2;
    vec4 m3;

    float inputZMin;
    float inputZMax;
    float outputZMin;
    float outputZMax;
} parameters;
The members inside that uniform block are so called push constants, a very
fast way to transfer small bits of data to the GPU.
In the compute shader they are used as parameters, and the CompositorEffect
transfers them every frame.
Be aware that the order of the push constant members is not arbitrary:
every type needs to be aligned to its own alignment requirement.
A vec4 must be aligned to 16 bytes, so it can't directly follow a single float.
To avoid misalignment, you can simply put the bigger data types first and the smaller ones later.
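RDGraph takes care of this packing, but on the raw RenderingDevice side it could look roughly like the following sketch; PackViewZConstants is a hypothetical helper, and the 20 packed floats (80 bytes) already are a multiple of 16:
// Hypothetical helper: packs the Parameters block above into a byte buffer.
// With std430, the vec4 matrix rows come first (16-byte aligned) and the
// four floats follow; 20 floats = 80 bytes, a multiple of 16.
static byte[] PackViewZConstants( Projection p, Vector2 zInput, Vector2 zOutput )
{
    float[] values = new float[]
    {
        p.X.X, p.X.Y, p.X.Z, p.X.W,
        p.Y.X, p.Y.Y, p.Y.Z, p.Y.W,
        p.Z.X, p.Z.Y, p.Z.Z, p.Z.W,
        p.W.X, p.W.Y, p.W.Z, p.W.W,
        zInput.X, zInput.Y, zOutput.X, zOutput.Y
    };

    byte[] buffer = new byte[ values.Length * sizeof( float ) ];
    System.Buffer.BlockCopy( values, 0, buffer, 0, buffer.Length );
    return buffer;
}
The resulting buffer would then be handed to ComputeListSetPushConstant together with its length in bytes.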
Here is the assignment of the push constants for GenerateViewZ. The RDContext already holds the right camera projection for the view;
normally you would unpack that data from the RDRenderData yourself.
protected override void ForAllViews()
{
    // -- other stuff

    var projection = context.GetCameraProjection().Inverse();

    generateViewZ.constants.Set(
        projection.X,
        projection.Y,
        projection.Z,
        projection.W,
        zInput.X,
        zInput.Y,
        zOutput.X,
        zOutput.Y
    );

    // -- more other stuff
}
The effect takes the z-texture and uses a Sobel filter to detect edges. These edges are weighted by
checking the normals around the center; if they are mostly coplanar with their neighbors, the edges are discarded.
A lot of other things happen to adjust several settings based on the center z value: I sample
the adjustment curves using the normalized center z as UV coordinate and use them for filtering and rendering the edges (width, intensity and so on).
And that's it. Thanks for hanging in! This is how CompositorEffects are written for the Rokojori Action Library.
I hope you found it useful for writing your own CompositorEffects in Godot.
And don't hesitate to send feedback on Discord, BlueSky or Mastodon.