[SOLVED] UI Element custom shader

Hi there,

I was adding some custom shaders to UI elements and noticed that some are not rendering in Orthographic camera mode. I figured out that if there is some object the camera is looking at, it overwrites the UI element from the 2D screen. I suppose it has something to do with depth writing? However, changing the depthWrite attribute didn't help. It also doesn't look like the blend type is the cause. I've made a small project to show my problem:

Both UI elements are in the same 2D Screen. However, the blue one on the left is a normal one, while the one on the right has a custom shader (simple red color). You can see that the red element is cut off by the plane in the background.

Hi @LeXXik,

Not sure why it isn't working in Orthographic camera mode, but in general Element entities under a 2D screen use a special shader pipeline. Using a new pc.Shader here most likely breaks the program that the PlayCanvas engine generates internally, which is what allows the Element to render as part of a 2D screen in the UI layer.

I'd say that you are better off exploring the PlayCanvas shader chunks as a starting point for writing custom shaders. It's very powerful and allows you to write shaders that take advantage of many features the standard PlayCanvas materials provide: text effects, fonts (and, for non-Element entities, more such as PBR lighting, shadowing, tonemapping, etc.).


Thank you, @Leonidas

I suppose I will have to drop custom shaders for now. In the meantime I will try to understand how the chunks work.

Is it actually a bug or a feature? Should I create a bug report?

There isn't a fixed pipeline for rendering in WebGL like there was in legacy OpenGL. Everything is rendered using shaders.

Overwriting the default shader, partially or completely, most likely breaks several things, so undesired behavior is expected unless you implement the missing pieces in your shader (camera projection, view matrix, vertex position, etc.). To sum up, I don't think this is a bug, mostly an advanced feature :slight_smile:



I went on and decided to get my hands dirty with the chunks :slight_smile:
However, I am a bit stuck on updateUniforms:

I guess this is the place where a standard material shader gets a default white color through the 'material_diffuse' parameter, which is passed to the diffusePS chunk. Am I correct? If so, where are these uniforms coming from, e.g. this.diffuseUniform or this.ambientUniform? I can't seem to find their definitions.

So, the Standard Material uses the Standard Program generator to produce the resulting shader program that gets compiled and pushed to the rendering pipeline. Here is how that program works:

So at any point you can change one of the shader chunks and update the material to generate a new shader, for example (taken from the Warp a Sprite with GLSL example):

    m.chunks.emissivePS = 
        "uniform sampler2D texture_emissiveMap;\n" +
        "uniform float time;\n" +
        "uniform float wavelength;\n" +
        "uniform float amplitude;\n" +
        "\n" +
        "vec3 getEmission() {\n" +
        "    vec2 uv = $UV;\n" +
        "    uv.y += sin((uv.x + time) / wavelength) * amplitude;\n" +
        "    return $texture2DSAMPLE(texture_emissiveMap, uv).$CH;\n" +
        "}";
    m.update();
This will force the material to update, and since a new emissive pixel shader chunk exists, it will generate a new shader using your custom code. Here you can define new uniforms directly in GLSL and those are picked up by the engine. You can update their value later using the setParameter method.
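For example, a small sketch of driving the chunk's uniforms from script. The uniform names `time`, `wavelength` and `amplitude` match the warp chunk above; `setParameter` is the real pc.Material method, while `updateWarp` and the chosen values are just illustrative:

```javascript
// Set one-off uniform values once, after assigning the chunk and
// calling material.update():
function initWarp(material) {
    material.setParameter('wavelength', 0.5);  // assumed value
    material.setParameter('amplitude', 0.02);  // assumed value
}

// Advance the animated uniform every frame (call from a script's
// update(dt) with the frame delta time):
let time = 0;
function updateWarp(material, dt) {
    time += dt;
    material.setParameter('time', time);
}
```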

All uniforms that were collected from a shader are pushed to the device scope here:


This is great! Thank you for pointing in the right direction, @Leonidas :slight_smile:

Alright, after some study, trials, errors, and victories, I conclude that chunks are indeed a powerful feature :slight_smile:
However, it does not fit my specific purpose, as it affects the whole 2D screen space: all my other elements pick up the same chunk. It is probably better suited for special effects, like post-processing the final frame, for example.

As a result, I went back to the new pc.Shader() method and figured out what the problem was. The issue was the Z depth. In our case, gl_Position.xy holds the clip-space XY coords, while gl_Position.z holds the depth info. Since -1.0 corresponds to the near plane and 1.0 to the far plane in OpenGL, all I had to do was set it to the near plane in the vertex shader:

    void main(void) {
        gl_Position = matrix_model * vec4(aPosition, 1.0);
        gl_Position.z = -1.0; // right here
    }
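To spell out why -1.0 works: in OpenGL conventions, clip-space z after the perspective divide lands in NDC [-1, 1], and the default depth range maps that to window depth [0, 1]. A quick sketch of that mapping in plain numbers (not engine code):

```javascript
// With the default glDepthRange(0, 1), OpenGL maps NDC z in [-1, 1]
// to window depth in [0, 1]. Forcing gl_Position.z = -1.0 (with w = 1)
// therefore yields window depth 0: the nearest possible value, so the
// element passes a LESS/LEQUAL depth test in front of everything.
function ndcToWindowDepth(ndcZ) {
    return (ndcZ + 1) / 2;
}

console.log(ndcToWindowDepth(-1.0)); // 0 -> near plane, always in front
console.log(ndcToWindowDepth(1.0));  // 1 -> far plane
```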

And here is the updated example, for reference:

Edit: typo, not aPostion, but gl_Position


Good work figuring this out and thanks for sharing @LeXXik!

Shader chunks overwrite a part of the generated program per material. So if you properly clone/apply a material to only a handful of models, the updated shader will affect only those.
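A sketch of that clone-first approach, assuming a mesh instance whose material you don't want to share; `clone()` and `update()` are real pc.Material methods, while `applyChunkToSingleModel` and `chunkSource` are hypothetical names:

```javascript
// Clone the material first so the chunk override only affects this one
// mesh instance, not every other user of the shared material.
function applyChunkToSingleModel(meshInstance, chunkSource) {
    const m = meshInstance.material.clone(); // independent copy
    m.chunks.emissivePS = chunkSource;       // override just this clone
    m.update();                              // regenerate the shader
    meshInstance.material = m;
    return m;
}
```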

But for the 2D screen you might have a point: that material might be internally generated and applied to all 2D screens.
