How to write custom compositor effects






WRITING A COMPOSITOR EFFECT

This animation was done with multiple CompositorVFX

CompositorEffects examples on the web are often quite complex and use a lot of unstructured low level code.

If this is your cup of tea (the raw low-level instructions), have a look at the Greyscale CompositorEffect in C# or the equivalent Greyscale CompositorEffect in GDScript.

Most of the examples don't explain how it actually works, what you can do and what you can't do. A lot of things are not documented yet, and the behavior is not obvious.

So I want to explain how it works: Working with CompositorEffects is basically only possible when you understand the commands and the workflow of Godot's RenderingDevice implementation.




THE RENDERING DEVICE

Most examples store a RenderingDevice somewhere via

RenderingDevice rd = RenderingServer.GetRenderingDevice();

and call its methods to create, destroy or manipulate the objects that are needed on the graphics card.

var computeList = rd.ComputeListBegin();
rd.ComputeListBindComputePipeline( computeList, pipeline );
rd.ComputeListBindUniformSet( computeList, uniformSet, 0 );
rd.ComputeListSetPushConstant( computeList, pushConstants, pcSize );
rd.ComputeListDispatch( computeList, xGroups, yGroups, zGroups );
rd.ComputeListEnd();

A RenderingDevice executing a compute shader: it binds the pipeline (shader), the uniform set and the push constants, and dispatches with the dimensions xGroups/yGroups/zGroups.

This RenderingDevice is Godot's low level API for rendering things on different platforms with different backends.

Typical for raw wrappers of low-level APIs, it has no objects, but works with handles called RIDs that you happily pass to long method names, often prefixed with an object name: for ComputeList it's ComputeListBegin, ComputeListDispatch etc.

This way you get all the fun of organizing and managing the objects on the graphics card yourself: as a developer you need to manually create and destroy every graphics card object. If a texture is not used anymore, you are responsible for deleting it!
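To give an impression of what that means, here is a minimal sketch of creating and freeing a texture directly on the RenderingDevice (the chosen format and usage bits are just example values):

// Minimal sketch: creating and freeing a texture by hand on the RenderingDevice.
var rd = RenderingServer.GetRenderingDevice();

var format = new RDTextureFormat();
format.Width  = 1920;
format.Height = 1080;
format.Format = RenderingDevice.DataFormat.R16G16B16A16Sfloat;
format.UsageBits = RenderingDevice.TextureUsageBits.StorageBit |
                   RenderingDevice.TextureUsageBits.SamplingBit;

Rid texture = rd.TextureCreate( format, new RDTextureView() );

// ... use the texture ...

// You are responsible for this, nobody else will do it:
rd.FreeRid( texture );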

But since I don't want to debug whether I called everything in the correct order to create a texture, sampler, shader or pipeline, I created a wrapper for Godot's RenderingDevice named RDContext.




SOME RD CONTEXT

To make things maintainable and to simplify working with the RenderingDevice, the RDContext was born. It wraps the low-level instructions and ensures everything is correctly set up.

I will also make a tutorial that focuses solely on it, but that is for the future.

A thing to note: the RDContext (and the RenderingDevice, too) is not exclusive to CompositorEffects. They can be used for custom rendering work in general, like creating a second device that computes other important things in parallel.
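Godot supports this directly: you can create a local RenderingDevice for such background work. A minimal sketch; on a local device, submitting, syncing and freeing are your responsibility:

// Minimal sketch: a local RenderingDevice, independent from the main one.
var localRD = RenderingServer.CreateLocalRenderingDevice();

// ... create shaders/buffers, record a compute list ...

localRD.Submit();
localRD.Sync();

// Local devices are not freed automatically:
localRD.Free();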

Instead of passing raw RIDs around, RDContext uses wrappers for objects that live on the GPU and associates them with their RID, so that you deal with real objects. Some examples are RDTexture, RDSampler, RDUniformSet and RDComputeList.

For each object that is created, the RDContext registers what needs to be cleaned up. This way an RDContext can automatically clean itself up when it's no longer needed.

rdContext.CleanUp();
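Under the hood the idea is roughly this; an illustrative sketch of the concept only, not the actual RDContext source:

using System.Collections.Generic;
using Godot;

// Illustrative sketch of the clean-up idea: every created RID gets registered,
// and CleanUp() frees them in reverse creation order.
public class RidCleanUpExample
{
    RenderingDevice rd = RenderingServer.GetRenderingDevice();
    List<Rid> ownedRids = new List<Rid>();

    public Rid Register( Rid rid )
    {
        ownedRids.Add( rid );
        return rid;
    }

    public void CleanUp()
    {
        for ( int i = ownedRids.Count - 1; i >= 0; i-- )
        {
            if ( ownedRids[ i ].IsValid ){ rd.FreeRid( ownedRids[ i ] ); }
        }

        ownedRids.Clear();
    }
}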

Additionally, RDContext has a message system that writes and caches error messages. In verbose mode, you can track down errors easily, since it documents every operation.

Introducing the RDContext here is important, because all Rokojori CompositorEffects use it, and I strongly encourage you to create your own layer if you want to write something yourself. So, back to CompositorEffects...




COMPOSITOR EFFECT FLOW

Before I show the complex, good stuff, I want to explain the basic concept of the CompositorEffect without RDContext.

The flow example uses a simplified version of the Greyscale CompositorEffect from above. Please check the linked files above if you want to look at a working version.


CONSTRUCTION
A CompositorEffect needs to attach itself to the RenderingServer in its constructor and must trigger its initialization there.

You can assign the EffectCallbackType, but keep in mind this can be changed in the editor.

It's easy to forget that you need to do this in the constructor or else all objects will be uninitialized and the RenderCallback throws waves of errors in the console.

public MyCompositorEffect()
{
    EffectCallbackType = EffectCallbackTypeEnum.PostTransparent;
    rd = RenderingServer.GetRenderingDevice();
    RenderingServer.CallOnRenderThread( Callable.From( Initialize ) );
}


INITIALIZATION
In the initialization phase, all relevant objects get compiled or created: Shaders, textures, pipelines, samplers etc.

For a single compute shader pass, you compile a shader loaded from a file: first to SPIR-V (an intermediate, binary shader format) and then create a shader object from it. This shader object is passed to the compute pipeline creation method, which gives you a pipeline back.

In this phase errors can appear, most likely a missing file or a wrong configuration. Here too, when an error happens, the RenderCallback does not care and happily generates waves of errors in the console.

public void Initialize()
{
    var shaderPath = "res://compositor-shader.glsl";
    var shaderFile = GD.Load<RDShaderFile>( shaderPath );
    var shaderSpirv = shaderFile.GetSpirV();

    shader = rd.ShaderCreateFromSpirV( shaderSpirv );

    if ( shader.IsValid )
    {
        pipeline = rd.ComputePipelineCreate( shader );
    }
}


RENDERING
In the rendering phase, which happens every frame, all variable properties like uniforms and push constants are collected and assigned. Then the compute shader is run for all views (in VR you have multiple).

Most examples do a million null and validity checks here. I removed them all for clarity. But if you have errors: waves of them in the console.

public override void _RenderCallback( int t, RenderData d )
{
    var renderSceneBuffers = ( RenderSceneBuffersRD ) d.GetRenderSceneBuffers();

    PrepareUniformSetsAndPushConstants();

    int views = (int) renderSceneBuffers.GetViewCount();

    for ( var i = 0; i < views; i++ )
    {
        var view = (uint) i;
        Rid inputImage = renderSceneBuffers.GetColorLayer( view );

        var uniform = new RDUniform();
        uniform.UniformType = RenderingDevice.UniformType.Image;
        uniform.Binding = 0;
        uniform.AddId( inputImage );

        var uniformSet = CreateUniformSet( uniform );

        ProcessComputeList();
    }
}
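ProcessComputeList() is left out above; here is a hedged sketch of what such a method typically contains. The group counts are derived from the image size and rounded up so the whole image is covered when the shader uses a local size of 8x8 (the method signature and names are assumptions, not the code from above):

// Sketch of a typical dispatch, assuming a compute shader with local_size 8x8.
void ProcessComputeList( Rid pipeline, Rid uniformSet, byte[] pushConstants, Vector2I size )
{
    var rd = RenderingServer.GetRenderingDevice();

    // Round up, so e.g. a 1921x1080 image still gets fully covered.
    uint xGroups = (uint) ( ( size.X + 7 ) / 8 );
    uint yGroups = (uint) ( ( size.Y + 7 ) / 8 );

    var computeList = rd.ComputeListBegin();
    rd.ComputeListBindComputePipeline( computeList, pipeline );
    rd.ComputeListBindUniformSet( computeList, uniformSet, 0 );
    rd.ComputeListSetPushConstant( computeList, pushConstants, (uint) pushConstants.Length );
    rd.ComputeListDispatch( computeList, xGroups, yGroups, 1 );
    rd.ComputeListEnd();
}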




RDGRAPH

Since even this very simplified example is already complicated, I needed something that also takes away the pain of setting things up and assigning them: RDGraph.

The reason is that even simple effects like blur often require more than one shader operation. A blur needs a copy of the screen buffer, otherwise it would concurrently read from and write to the same image. Blurs can also be implemented in multiple stages, which requires some code to ping-pong between texture targets, as sketched below.
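As referenced above, a multi-pass blur typically swaps its read and write targets between passes, roughly like this (a generic sketch with assumed helper names and textures, not the RDGraph implementation):

// Generic ping-pong sketch: two textures alternate between being
// read from and written to over several blur passes.
Rid readTexture  = blurTextureA;
Rid writeTexture = blurTextureB;

for ( int pass = 0; pass < blurPasses; pass++ )
{
    DispatchBlurPass( readTexture, writeTexture );

    // Swap: the result of this pass is the input of the next one.
    ( readTexture, writeTexture ) = ( writeTexture, readTexture );
}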

For these tasks, RDGraph takes components from RDContext and lets you connect them as a graph of processing nodes. This makes filters/processors/shaders reusable and easy to wire together.

Again, RDGraph is not limited to CompositorEffects; it is a general tool for working with the RenderingDevice.

And to make it streamlined, I created the RDGraphCompositorEffect.

It automatically does all the configuration for CompositorEffects and lets you set up a graph that is then used to render the effect.

This makes it possible to create classes like RG_Copy, which copies one image to another, or RG_Blur, which blurs an input image and writes it to an output image. It is very similar to node systems like a visual node editor; a rough wiring sketch follows below.
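Purely as an illustration of the wiring, such nodes could be connected like this. The graph calls follow the pattern of the DepthOutlinesEffect shown further down, but the exact RG_Blur constructor and slot layout here are assumptions, not the library's documented API:

// Rough illustration only; the RG_Blur constructor/slots are assumptions.
var screenColor = new RG_ScreenColorTexure( graph );
var buffer      = RG_BufferTexture.ScreenSize( graph );
var blur        = new RG_Blur( graph );

graph.InitializeNodes();

// Read from the screen color, write the blurred result into the buffer.
blur.SetTextureSlotInputs( screenColor, buffer );

graph.SetProcessOrder( screenColor, buffer, blur );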

This is important to know, because the majority of effects in the Rokojori Action Library use the RDGraph to compose effects from multiple smaller, reusable components.



OUTLINES EFFECT EXAMPLE

So, let me show you how the DepthOutlinesEffect works. You can either look at the code in the repository or read the full listing below. The explanation continues below the code.
using Godot;

namespace Rokojori
{
    [Tool]
    [GlobalClass]
    public partial class DepthOutlinesEffect:EdgeCompositorEffect
    {
        // INITIALIZE IN CONSTRUCTOR!
        public DepthOutlinesEffect():base()
        {
            Initialize();
        }

        // UI PARAMETERS
        [ExportGroup( "Main")]
        [Export( PropertyHint.Range, "0,1") ] public float amount = 1f;
        [Export( PropertyHint.Range, "-1,1") ] public float outlineWidth = 1f;
        [Export] public CurveTexture outlineWidthCurve = new CurveTexture().WithCurve( new Curve().WithValues( 0.5f, 0.5f ) );
        [Export] public Color edgeColor = Colors.Black;
        [Export( PropertyHint.Range, "0,1") ] public float edgeDistanceFade = 0.2f;
        [Export] public Color fillColor = new Color( 1.0f, 1.0f, 1.0f, 0.0f );
        [Export] public Vector2 rimOffset = Vector2.Zero;
        [Export] public float rimContrast = 1.0f;
        [Export( PropertyHint.Range, "0,1") ] public float rimStrength = 1.0f;
        [Export( PropertyHint.Range, "0,1") ] public float zEdgeAmount = 1f;
        [Export( PropertyHint.Range, "0,1") ] public float normalEdgeAmount = 1f;
        [Export( PropertyHint.Range, "0,1") ] public float normalEdgeAmountMin = 0.05f;
        [Export( PropertyHint.Range, "0,1") ] public float normalEdgeAmountMax = 0.15f;
        [Export] public float zTreshold = 0.1f;
        [Export] public CurveTexture zTresholdCurve = new CurveTexture().WithCurve( new Curve().WithValues( 1, 1 ) );
        [Export] public float edgeIntensity = 1f;
        [Export] public CurveTexture edgeIntensityCurve = new CurveTexture().WithCurve( new Curve().WithValues( 1, 1 ) );
        [Export( PropertyHint.Range, "0,1") ] public float adaptiveScaleAmount = 0.5f;
        [Export] public float adaptiveScaleNormalizer = 1f;
        [Export ] public Vector2 zInput = new Vector2( 0.1f, 4000f );
        [Export ] public Vector2 zOutput = new Vector2( 0f, 1f );

        // GRAPH NODES
        RG_ScreenColorTexure screenColorTexture;
        RG_ScreenDepthTexture screenDepthTexture;
        RG_ScreenNormalRoughnessTexture screenNormalRoughnessTexture;
        RG_BufferTexture bufferTexture;
        RG_ImageTexture zTresholdTexture;
        RG_ImageTexture edgeIntensityTexture;
        RG_ImageTexture outlineWidthTexture;
        RG_GenerateViewZ generateViewZ;
        RG_ZOutlines zOutlines;

        void Initialize()
        {
            screenColorTexture = new RG_ScreenColorTexure( graph );
            screenDepthTexture = new RG_ScreenDepthTexture( graph );
            screenNormalRoughnessTexture = new RG_ScreenNormalRoughnessTexture( graph );
            bufferTexture = RG_BufferTexture.ScreenSize( graph );
            zTresholdTexture = new RG_ImageTexture( graph );
            edgeIntensityTexture = new RG_ImageTexture( graph );
            outlineWidthTexture = new RG_ImageTexture( graph );
            generateViewZ = new RG_GenerateViewZ( graph );
            zOutlines = new RG_ZOutlines( graph );

            graph.InitializeNodes();

            generateViewZ.SetTextureSlotInputs( screenDepthTexture, bufferTexture );
            generateViewZ.input.UseLinearSamplerEdgeClamped();

            zOutlines.SetTextureSlotInputs( screenColorTexture, screenColorTexture );
            zOutlines.input.UseLinearSampler();

            zOutlines.AddTextureSlotInput( bufferTexture ).UseLinearSamplerEdgeClamped();
            zOutlines.AddTextureSlotInput( zTresholdTexture ).UseLinearSamplerEdgeClamped();
            zOutlines.AddTextureSlotInput( edgeIntensityTexture ).UseLinearSamplerEdgeClamped();
            zOutlines.AddTextureSlotInput( screenNormalRoughnessTexture ).UseLinearSamplerEdgeClamped();
            zOutlines.AddTextureSlotInput( outlineWidthTexture ).UseLinearSamplerEdgeClamped();

            graph.SetProcessOrder(
                screenColorTexture,
                screenDepthTexture,
                screenNormalRoughnessTexture,
                bufferTexture,
                zTresholdTexture,
                edgeIntensityTexture,
                generateViewZ,
                zOutlines
            );
        }

        protected override void ForAllViews()
        {
            zTresholdTexture.SetImageTexture( zTresholdCurve );
            edgeIntensityTexture.SetImageTexture( edgeIntensityCurve );
            outlineWidthTexture.SetImageTexture( outlineWidthCurve );

            var projection = context.GetCameraProjection().Inverse();

            generateViewZ.constants.Set(
                projection.X, projection.Y, projection.Z, projection.W,
                zInput.X, zInput.Y, zOutput.X, zOutput.Y
            );

            zOutlines.constants.Set(
                amount * edgeColor.A, edgeColor.R, edgeColor.G, edgeColor.B,
                zInput.X, zInput.Y,
                zTreshold, edgeIntensity,
                adaptiveScaleAmount, adaptiveScaleNormalizer,
                zEdgeAmount, normalEdgeAmount,
                normalEdgeAmountMin, normalEdgeAmountMax,
                rimOffset.X, rimOffset.Y,
                fillColor,
                rimContrast, rimStrength,
                Mathf.Pow( 10f, outlineWidth ),
                edgeDistanceFade
            );
        }
    }
}



OUTLINES EXAMPLE DETAILS

The example creates a couple of RDGraph nodes, initializes and connects them, and finally sets up a graph process order.

It takes advantage of RDGraphCompositorEffect, which creates the RDGraph in the constructor.

The nodes that are used can be put into two categories: data nodes and process nodes. While data nodes prepare or provide data (mainly textures), process nodes take other nodes and manipulate them.

The effect uses some of the most important data nodes for textures.

RG_ScreenColorTexture
Resolves the screen color texture of the current view

RG_ScreenDepthTexture
Resolves the screen depth texture of the current view

RG_ScreenNormalRoughnessTexture
Resolves the screen normal roughness texture of the current view

RG_BufferTexture
Creates a new texture that can be written to or read from. This is assigned with a RG_TextureCreator, which can handle dynamic or fixed sizes automatically (like ScreenSize).

RG_ImageTexture
Creates a texture that can be assigned from an external Texture2D



OUTLINES PROCESSING DETAILS

The nodes mentioned above are all data nodes, which don't do any processing. They just ensure that the right textures are available at the right time.

The actual processing nodes are:

RG_GenerateViewZ
This takes a depth texture and buffer texture, converts depth to view-z and writes it to the buffer texture.

RG_ZOutlines
This node takes a lot of textures, including the z texture, normal roughness texture and color texture and writes back the computed outlines to the color texture.

Both processors, RG_GenerateViewZ and RG_ZOutlines, extend RG_ImageProcessor.

RG_ImageProcessor is a node for RDGraph that uses a shader and at least one input and one output texture. Additional textures can also be passed into it. It is the base for most process nodes.

Classes extending RG_ImageProcessor don't define anything besides the path of their shader, so they are mainly boilerplate around that path.

The shader itself should at least define two slots for textures (image2D or sampler2D).



OUTLINES GENERATING Z

To get an idea of how the process class for RG_GenerateViewZ is written, here's the full source:

public class RG_GenerateViewZ:RG_ImageProcessor
{
    public static readonly string directory = "Nodes/Processors/Depth/GenerateViewZ/";
    public static readonly string name = "GenerateViewZ.glsl";
    public static readonly string shaderPath = RDGraph.Path( directory + name );

    public RG_GenerateViewZ( RDGraph graph ): base( graph, shaderPath ){}
}

It's basically very short, and even shorter in the real code base; I only added the extra variables here so the lines wrap nicely on a mobile screen.

To use the node inside the CompositorEffect, the textures are assigned and a sampler is created for the input. After the setup, the order of execution is defined in the graph: usually texture sources first, then the processors.

// ---- stuff ----

generateViewZ.SetTextureSlotInputs( screenDepthTexture, bufferTexture );
generateViewZ.input.UseLinearSamplerEdgeClamped();

// ---- stuff ----

graph.SetProcessOrder(
    screenColorTexture,
    screenDepthTexture,
    screenNormalRoughnessTexture,
    bufferTexture,
    zTresholdTexture,
    edgeIntensityTexture,
    generateViewZ,
    zOutlines
);

The shader uses classic stuff, like an image2D that can only be accessed via integer coordinates (ivec2) and a sampler2D that samples an image with normalized coordinates.
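On the C# side this distinction shows up in the uniform type: a sampler2D slot is bound as a SamplerWithTexture uniform (a sampler RID plus a texture RID), while an image2D slot is bound as a plain Image uniform. A hedged sketch; the method and variable names are assumptions:

// Sketch: building uniform sets for a sampler2D input (set 0) and an image2D output (set 1).
( Rid inputSet, Rid outputSet ) CreateImageProcessorSets(
    RenderingDevice rd, Rid shader, Rid linearSampler, Rid inputTexture, Rid outputTexture )
{
    var inputUniform = new RDUniform();
    inputUniform.UniformType = RenderingDevice.UniformType.SamplerWithTexture;
    inputUniform.Binding = 0;
    inputUniform.AddId( linearSampler );  // the sampler first ...
    inputUniform.AddId( inputTexture );   // ... then the texture to sample

    var outputUniform = new RDUniform();
    outputUniform.UniformType = RenderingDevice.UniformType.Image;
    outputUniform.Binding = 0;
    outputUniform.AddId( outputTexture ); // a storage texture the shader writes to

    var inputSet  = rd.UniformSetCreate( new Godot.Collections.Array<RDUniform>{ inputUniform },  shader, 0 );
    var outputSet = rd.UniformSetCreate( new Godot.Collections.Array<RDUniform>{ outputUniform }, shader, 1 );

    return ( inputSet, outputSet );
}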

Since the invocation of the compute shader does not align perfectly with the image size, we have to check the bounds:

// Includes for screenToView()/mapClamped()

// Size of workgroup, needed for compute size
layout( local_size_x = 8, local_size_y = 8, local_size_z = 1 ) in;

// Uniform Set 0
layout( set = 0, binding = 0 ) uniform sampler2D depthSampler;

// Uniform Set 1
layout( rgba16, set = 1, binding = 0 ) uniform restrict writeonly image2D outputImage;

// Push Constants
layout( push_constant, std430 ) uniform Parameters
{
    vec4 m0;
    vec4 m1;
    vec4 m2;
    vec4 m3;

    float inputZMin;
    float inputZMax;
    float outputZMin;
    float outputZMax;
} parameters;

float getZ( vec2 uv, float depth, mat4 INV_PROJ )
{
    vec4 position = screenToView( uv, depth, INV_PROJ );
    return max( 0.0, -position.z );
}

// Main Compute Function
void main()
{
    ivec2 size = imageSize( outputImage );
    ivec2 xy = ivec2( gl_GlobalInvocationID.xy );

    if ( any( greaterThanEqual( xy, size ) ) )
    {
        return;
    }

    vec2 uv = ( vec2( xy ) + vec2( 0.5 ) ) / vec2( size );

    mat4 INV_PROJ = mat4( parameters.m0, parameters.m1, parameters.m2, parameters.m3 );

    float depth = texture( depthSampler, uv ).r;
    float z = getZ( uv, depth, INV_PROJ );

    float mappedZ = mapClamped( z, parameters.inputZMin, parameters.inputZMax, parameters.outputZMin, parameters.outputZMax );

    vec4 color = vec4( mappedZ, mappedZ, mappedZ, 1.0 );

    imageStore( outputImage, xy, color );
}



PUSH CONSTANTS

The shader defines, before its main function:

layout( push_constant, std430 ) uniform Parameters
{
    vec4 m0;
    vec4 m1;
    vec4 m2;
    vec4 m3;

    float inputZMin;
    float inputZMax;
    float outputZMin;
    float outputZMax;
} parameters;

The members inside that uniform are so-called push constants, a very fast way to transfer small bits of data to the GPU.

In the ComputeShader they are used as parameters and are transferred in the CompositorEffect every frame.

Be aware that the order of the push constant members is not arbitrary. Every type needs to be aligned to its size: a vec4 must be aligned to 16 bytes, so it can't be the second member right after a single float.

To avoid misalignment, you can just start with bigger data types and use smaller data types later.
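Since Godot uploads push constants as a raw byte buffer whose total size must be a multiple of 16 bytes, packing the GenerateViewZ block by hand would look roughly like this (a sketch for illustration; the helper name is an assumption):

// Sketch: packing a push-constant block (4 vec4 + 4 floats) into a byte buffer.
// 4 * 16 bytes for the matrix columns + 4 * 4 bytes for the floats = 80 bytes,
// which is already a multiple of 16, so no extra padding is needed here.
byte[] PackPushConstants( Projection invProj, Vector2 zInput, Vector2 zOutput )
{
    var floats = new float[ 20 ];

    var columns = new Vector4[]{ invProj.X, invProj.Y, invProj.Z, invProj.W };

    for ( int i = 0; i < 4; i++ )
    {
        floats[ i * 4 + 0 ] = columns[ i ].X;
        floats[ i * 4 + 1 ] = columns[ i ].Y;
        floats[ i * 4 + 2 ] = columns[ i ].Z;
        floats[ i * 4 + 3 ] = columns[ i ].W;
    }

    floats[ 16 ] = zInput.X;   // inputZMin
    floats[ 17 ] = zInput.Y;   // inputZMax
    floats[ 18 ] = zOutput.X;  // outputZMin
    floats[ 19 ] = zOutput.Y;  // outputZMax

    var bytes = new byte[ floats.Length * sizeof( float ) ];
    System.Buffer.BlockCopy( floats, 0, bytes, 0, bytes.Length );
    return bytes;
}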

Assignment of the push constants for GenerateViewZ. The RDContext provides the correct camera projection for the current view; normally you would unpack this data from the RDRenderData.
protected override void ForAllViews()
{
    // -- other stuff

    var projection = context.GetCameraProjection().Inverse();

    generateViewZ.constants.Set(
        projection.X, projection.Y, projection.Z, projection.W,
        zInput.X, zInput.Y, zOutput.X, zOutput.Y
    );

    // -- more other stuff
}



CLOSING

The rest of the effect, the ZOutlines shader, can be viewed in the repository or in the listing below.
#[compute]
#version 450

layout( local_size_x = 8, local_size_y = 8, local_size_z = 1 ) in;

layout( set = 0, binding = 0 ) uniform sampler2D inputImage;
layout( rgba16f, set = 1, binding = 0 ) uniform image2D outputImage;
layout( set = 2, binding = 0 ) uniform sampler2D zSampler;
layout( set = 3, binding = 0 ) uniform sampler2D zTresholdSampler;
layout( set = 4, binding = 0 ) uniform sampler2D edgeIntensitySampler;
layout( set = 5, binding = 0 ) uniform sampler2D normalRoughnessSampler;
layout( set = 6, binding = 0 ) uniform sampler2D outlineWidthSampler;

layout( push_constant, std430 ) uniform Parameters
{
    float amount;
    float edgeR;
    float edgeG;
    float edgeB;
    float zMin;
    float zMax;
    float edgeThreshold;
    float edgeIntensity;
    float adaptiveScaleAmount;
    float adaptiveScaleNormalizer;
    float zEdgeAmount;
    float normalEdgeAmount;
    float normalEdgeZMin;
    float normalEdgeZMax;
    float rimOffsetX;
    float rimOffsetY;
    float fillR;
    float fillG;
    float fillB;
    float fillA;
    float rimContrast;
    float rimStrength;
    float outlineWidth;
    float edgeDistanceFade;
} parameters;

float sampleNormalizedZ( vec2 uv, int x, int y, vec2 stepSize )
{
    return textureLod( zSampler, uv + vec2( float( x ), float( y ) ) * stepSize, 0 ).r;
}

float sampleZ( vec2 uv, int x, int y, vec2 stepSize )
{
    float z = sampleNormalizedZ( uv, x, y, stepSize );
    return z * ( parameters.zMax - parameters.zMin ) + parameters.zMin;
}

vec3 sampleNormal( vec2 uv, int x, int y, vec2 stepSize )
{
    vec3 normalRGB = textureLod( normalRoughnessSampler, uv + vec2( float( x ), float( y ) ) * stepSize, 0 ).rgb;
    vec3 normalXYZ = normalRGB * 2.0 - vec3( 1.0 );
    return normalize( normalXYZ );
}

vec3 SRGBtoLINEAR( vec3 sRGB )
{
    return mix( pow( ( sRGB + vec3( 0.055 ) ) * ( 1.0 / ( 1.0 + 0.055 ) ), vec3( 2.4 ) ), sRGB * ( 1.0 / 12.92 ), lessThan( sRGB, vec3( 0.04045 ) ) );
}

vec4 SRGBtoLINEAR( vec4 sRGB )
{
    return vec4( SRGBtoLINEAR( sRGB.rgb ), sRGB.a );
}

void main()
{
    ivec2 size = imageSize( outputImage );
    ivec2 xy = ivec2( gl_GlobalInvocationID.xy );

    if ( any( greaterThanEqual( xy, size ) ) )
    {
        return;
    }

    vec2 stepSize = vec2( 1.0 ) / vec2( size );
    vec2 uv = ( vec2( xy ) + vec2( 0.5 ) ) * stepSize;

    // Sample center z distance
    float z11 = sampleNormalizedZ( uv, 0, 0, stepSize );
    float zCenter = z11 * ( parameters.zMax * parameters.zMin ) + parameters.zMin;
    vec2 zUV = vec2( z11, 0.0 );

    // Get outline width scale
    float outlineWidthCurve = pow( 10.0, textureLod( outlineWidthSampler, zUV, 0 ).r * 2.0 - 1.0 );
    float outlineWidth = outlineWidthCurve * parameters.outlineWidth;

    // Sample Z distance for neighbors
    // 0 = left/top
    // 1 = center
    // 2 = right/bottom
    float z00 = sampleZ( uv, -1, -1, stepSize * outlineWidth );
    float z10 = sampleZ( uv,  0, -1, stepSize * outlineWidth );
    float z20 = sampleZ( uv,  1, -1, stepSize * outlineWidth );
    float z01 = sampleZ( uv, -1,  0, stepSize * outlineWidth );
    float z21 = sampleZ( uv,  1,  0, stepSize * outlineWidth );
    float z02 = sampleZ( uv, -1,  1, stepSize * outlineWidth );
    float z12 = sampleZ( uv,  0,  1, stepSize * outlineWidth );
    float z22 = sampleZ( uv,  1,  1, stepSize * outlineWidth );

    // Gradient of edge
    float gx = -z00 - 2.0 * z01 - z02 + z20 + 2.0 * z21 + z22;
    float gy = -z00 - 2.0 * z10 - z20 + z02 + 2.0 * z12 + z22;

    // Compute Rim
    vec2 rimOffset = vec2( parameters.rimOffsetX, parameters.rimOffsetY );
    float rimZ = sampleZ( uv + rimOffset, 0, 0, stepSize );
    float rimStrength = min( 1.0, max( 0.0, abs( zCenter - rimZ ) ) * parameters.rimContrast / zCenter );
    rimStrength *= parameters.rimStrength;

    // Sample settings for threshold/intensity
    float zThresholdCurve = textureLod( zTresholdSampler, zUV, 0 ).r;
    float edgeIntensityCurve = textureLod( edgeIntensitySampler, zUV, 0 ).r;

    // Apply adaptive scale
    float adaptiveScale = zCenter / parameters.adaptiveScaleNormalizer;
    adaptiveScale = mix( 1.0, adaptiveScale, parameters.adaptiveScaleAmount );

    // Compute raw edge
    float zEdge = length( vec2( gx, gy ) );
    zEdge = ( zEdge - adaptiveScale * parameters.edgeThreshold * zThresholdCurve ) * parameters.edgeIntensity * edgeIntensityCurve;
    zEdge = clamp( zEdge, 0.0, 1.0 );

    // Sample normals of neighbors
    vec3 n10 = sampleNormal( uv,  0, -1, stepSize * outlineWidth );
    vec3 n01 = sampleNormal( uv, -1,  0, stepSize * outlineWidth );
    vec3 n11 = sampleNormal( uv,  0,  0, stepSize * outlineWidth );
    vec3 n21 = sampleNormal( uv,  1,  0, stepSize * outlineWidth );
    vec3 n12 = sampleNormal( uv,  0,  1, stepSize * outlineWidth );

    vec3 centerNormal = normalize( 0.5 * n11 + 0.5 * ( n10 + n01 + n21 + n12 ) / 4.0 );

    float normalEdge = 0.0;

    // Combine normalEdge
    normalEdge = max( normalEdge, 1.0 - dot( n11, n01 ) );
    normalEdge = max( normalEdge, 1.0 - dot( n11, n21 ) );
    normalEdge = max( normalEdge, 1.0 - dot( n11, n10 ) );
    normalEdge = max( normalEdge, 1.0 - dot( n11, n12 ) );

    // Compute edge weight from normals
    float edgeWeight = smoothstep( parameters.normalEdgeZMin, parameters.normalEdgeZMax, normalEdge );

    // Combine normal edge and weighted zEdge
    float edge = max( normalEdge * parameters.normalEdgeAmount, zEdge * parameters.zEdgeAmount * edgeWeight );
    edge = clamp01( edge );

    // Apply distance fade
    float edgeDistanceFade = mix( 1.0, 1.0 - z11, parameters.edgeDistanceFade );
    edge *= edgeDistanceFade;

    // Mix everything
    vec4 originalColor = textureLod( inputImage, uv, 0 );
    vec4 fillColor = SRGBtoLINEAR( vec4( parameters.fillR, parameters.fillG, parameters.fillB, 1.0 ) );
    originalColor = mix( originalColor, fillColor, parameters.fillA * parameters.amount );
    originalColor = mix( originalColor, vec4( 1.0 ), rimStrength * parameters.amount );

    vec4 edgeColor = SRGBtoLINEAR( vec4( parameters.edgeR, parameters.edgeG, parameters.edgeB, 1.0 ) );
    vec4 mixedColor = mix( originalColor, edgeColor, parameters.amount * edge );

    imageStore( outputImage, xy, mixedColor );
}

The effect takes the z-texture and uses a Sobel filter to detect edges. These edges are weighted by checking the normals in the center area: if they are mostly coplanar with their neighbors, the edges are discarded.

A lot of other things happen to adjust several settings based on the center z: I sample the adjustment curves using the normalized center z as the UV coordinate and use them for filtering and rendering the edges (width, intensity etc.).

And that's it. Thanks for hanging in! This is how CompositorEffects are written for the Rokojori Action Library.

I hope you found it useful for writing your own CompositorEffects in Godot. Don't hesitate to give feedback on Discord, BlueSky or Mastodon.






















