I was adding some custom shaders to UI elements and noticed that some are not rendering in Orthographic camera mode. It seems that if the camera is looking at some 3D object, the UI element on the 2D screen gets occluded by it. I suppose it has something to do with depth writing? However, changing the depthWrite attribute didn’t help, and the blend type doesn’t look like the cause either. I’ve made a small project to show my problem: https://playcanvas.com/project/663943/overview/temp-1
Both UI elements are in the same 2D Screen. However, the blue one on the left is a normal one, while the one on the right has a custom shader (simple red color). You can see that the red element is cut off by the plane in the background.
Not sure why it isn’t working in Orthographic camera mode, but in general, Element entities under a 2D Screen use a special shader pipeline. Using a new pc.Shader here most likely breaks the program that the PlayCanvas engine generates internally to render the Element as part of a 2D screen in the UI layer.
I’d say you are better off exploring the PlayCanvas shader chunks as a starting point for writing custom shaders. It’s very powerful and allows you to write shaders that take advantage of many features the standard PlayCanvas materials provide: text effects, fonts (and more for non-Element entities, like PBR lighting, shadowing, tonemapping, etc.).
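As a rough sketch of what a chunk override looks like (the exact chunk signature can differ between engine versions, so check the version you are on), a replacement for the emissive pixel shader chunk can be as small as:

```glsl
// Drop-in replacement for the engine's emissivePS chunk:
// output a solid red emissive contribution instead of the default.
vec3 getEmission() {
    return vec3(1.0, 0.0, 0.0);
}
```

Assigning a string like this to material.chunks.emissivePS and calling material.update() is enough to have the engine splice it into the generated shader; the rest of the pipeline (transforms, blending, etc.) stays intact.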
There isn’t a fixed pipeline for rendering in WebGL like there is in OpenGL. Everything is rendered using shaders.
Overwriting the default shader, partially or completely, most likely breaks several things, so undesired behavior is expected unless you implement the missing pieces in your shader (camera projection, view matrix, vertex position transformation, etc.). To sum up, I don’t think this is a bug; it’s more of an advanced feature.
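For a mesh rendered in the 3D world (as opposed to a screen-space Element), the vertex shader has to apply those transforms itself. A minimal sketch, assuming the standard PlayCanvas attribute/uniform names:

```glsl
attribute vec3 aPosition;            // object-space vertex position

uniform mat4 matrix_model;           // object space -> world space
uniform mat4 matrix_viewProjection;  // world space -> clip space

void main(void) {
    // Without this full transform chain, geometry ends up in the
    // wrong space and appears missing or misplaced.
    gl_Position = matrix_viewProjection * matrix_model * vec4(aPosition, 1.0);
}
```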
I went ahead and got my hands dirty with the chunks.
However, I am a bit stuck on updateUniforms:
I guess this is the place where a standard material shader gets its default white color through the ‘material_diffuse’ parameter, which is passed to the diffusePS chunk. Am I correct? If yes, where are these uniforms coming from, e.g. this.diffuseUniform or this.ambientUniform? I can’t seem to find their definitions.
This will force the material to update, and since a new emissive pixel shader chunk exists, it will generate a new shader using your custom code. Here you can define new uniforms directly in GLSL, and those are picked up by the engine. You can update their values later using the setParameter method.
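As a sketch of that flow (the uniform name uColor below is my own choice for illustration, not a built-in; the getEmission() signature assumes the same chunk shape as above):

```glsl
// Custom emissivePS chunk declaring a brand-new uniform directly in GLSL.
// After the program is linked, the engine discovers "uColor" in the shader
// and it becomes settable from script, e.g.:
//   material.setParameter('uColor', [1, 0, 0]);
uniform vec3 uColor;

vec3 getEmission() {
    return uColor;
}
```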
All uniforms that were collected from a shader are pushed to the device scope here:
Alright, after some study, trials, errors, and victories, I conclude that chunks are indeed a powerful feature.
However, it does not fit my specific purpose: overriding a chunk affects every element in the 2D screen space, not just the one I want to customize. It is probably better suited for special effects, like post-processing the final frame.
As a result, I went back to the new pc.Shader() method and figured out what the problem was. The issue was the Z depth of the vertex. In our case, gl_Position.xy holds the XY coords of the vertex, while gl_Position.z holds the depth info. Since -1.0 corresponds to the near plane and 1.0 to the far plane in OpenGL, all I had to do was set it to the near plane in the vertex shader:
gl_Position = matrix_model * vec4(aPosition, 1.0);
gl_Position.z = -1.0; // right here
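Putting it together, a minimal shader pair for the Element might look like this (a sketch: aPosition and matrix_model follow the standard PlayCanvas naming, and forcing z to -1.0 assumes w stays 1.0, which holds for the screen-space transform here):

```glsl
// Vertex shader
attribute vec3 aPosition;
uniform mat4 matrix_model;

void main(void) {
    gl_Position = matrix_model * vec4(aPosition, 1.0);
    // Pin the depth to the near plane so 3D geometry can never occlude
    // this screen-space element.
    gl_Position.z = -1.0;
}
```

```glsl
// Fragment shader: flat red
precision mediump float;

void main(void) {
    gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
}
```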