Accessing Depth Buffer in shader without chunking

I've seen the fog example, which uses chunks, but I'm fairly new to writing shaders and prefer to stick to my workflow of using full vertex/fragment shaders. Is there a simple way to access the depth buffer this way?
Any help is much appreciated!

On GitHub I saw this from December 2nd: “added a shared chunk to graphics, which gets included in all fragment shaders, for global built-in functions. Currently contains a function to generate texture coordinates for sampling from grab pass, and this handles the upside down case for WebGPU.”
I'm not sure what this function is, and I suppose there aren't any examples for it yet. I've tried directly sampling uSceneDepthMap, but no luck unfortunately.

Hi @Slush,

You can use the depth buffer in custom shaders. I don't have an example in mind right now, but you can try including the following (it's a string) somewhere at the top of your custom fragment shader:

pc.shaderChunks.screenDepthPS
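For example, a minimal sketch (myFragmentGLSL is a placeholder for your own fragment shader source):

    var fshader = pc.shaderChunks.screenDepthPS + myFragmentGLSL;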

And also make sure to ask for depth; I think enabling Depth Grabpass on your active camera will do that.

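Alternatively, you can request it from code on the camera component:

    // ask the engine to render/update the scene depth map for shaders
    this.entity.camera.requestSceneDepthMap(true);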

Now you can use the regular shader chunk functions to read the depth:

float depth = getLinearScreenDepth();
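For example, a minimal debug fragment shader (just a sketch, with the chunk prepended as above; the 0.01 scale is arbitrary, purely to make nearby depth values visible as grey):

    var fragmentShader = pc.shaderChunks.screenDepthPS + `
        void main() {
            // linear view-space depth at this fragment, in world units
            float depth = getLinearScreenDepth();
            // arbitrary scale for visualization
            gl_FragColor = vec4(vec3(depth * 0.01), 1.0);
        }
    `;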

Let me know how that works for you.

This engine-only example shows you a custom vertex/fragment shader that accesses the depth:
https://playcanvas.github.io/#/graphics/ground-fog

That should be pretty easy to use in the Editor as well.

This is what I originally tried, but this depth value always reads 1.

Additionally, I get a one-time error: Shader [Shader Id 5 shader] requires texture sampler [uSceneDepthMap] which has not been set, while rendering [Pass:RenderAction 0-0 Cam: Camera | Camera | World | NurbsPath.020]. I'm assuming this is only an issue on the first frame. Interestingly, this error does not get thrown when using BLEND_NORMAL, but I would like to use BLEND_NONE on my material.

I've tried messing around with camera_params.x as shown in the fog example, with no success. When copying the fog example exactly, I'm able to get fragmentDepth to read properly, but not sceneDepth (fragmentDepth isn't of much use to me). I've also tried calling camera.requestSceneDepthMap(true) directly, with no luck.

It should be noted that in the fog example, trying to use getGrabScreenPos() results in 'getGrabScreenPos' : no matching overloaded function found, and there is no definition of getGrabScreenPos in the GitHub repo for screenDepthPS, though I don't believe this is the source of the error.

One more issue: when I try to include pc.shaderChunks.screenDepthPS in my vertex shader, I get the error ERROR: 0:28: 'gl_FragCoord' : undeclared identifier. I can get around this by copying the chunk's code from GitHub into my vertex shader and deleting the fragment-only function, but I thought it was worth mentioning, since copying the code from the example as-is throws a number of errors. It would be nice if there were an updated and simpler example for accessing the depth buffer @mvaligursky.

Enable Depth Grabpass on the camera.

Yes, I have Depth Grabpass enabled on the camera, and I’ve tried both with and without Clear Depth Buffer enabled.

Are you using multiple cameras? If not, you shouldn’t get that error :thinking:

Only one camera in my scene, but I have the shader on multiple materials around the scene. Oddly, the error does not get thrown when I use BLEND_NORMAL, but the depth buffer still appears to only read 1. Additionally, I would like to use BLEND_NONE if possible.

It seems like you're using a shader that depends on the depth, on a layer that renders before the Depth layer. You can only use it in a layer that renders after the Depth layer.

In general, you have layers like this:

World
Depth
Skydome
World Transparent

The depth is grabbed from the scene's depth buffer inside the Depth layer, and so it can only be used in the layers that render after it.
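If you need BLEND_NONE (so the mesh doesn't move to the transparent pass), one option is to insert your own layer right after Depth and render the mesh there. A rough sketch; the 'AfterDepth' layer name and the cameraEntity variable are assumptions:

    // find where the built-in Depth layer sits in the composition
    var layers = this.app.scene.layers;
    var depthIndex = layers.getOpaqueIndex(layers.getLayerByName('Depth'));

    // hypothetical custom layer inserted right after it
    var afterDepth = new pc.Layer({ name: 'AfterDepth' });
    layers.insertOpaque(afterDepth, depthIndex + 1);

    // draw the entity on the new layer, and make sure the camera renders it
    this.entity.render.layers = [afterDepth.id];
    cameraEntity.camera.layers = cameraEntity.camera.layers.concat([afterDepth.id]);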

Also, how do you create the shader?

Ah this may be the issue. I’ll try changing the render layer and let you know if I have any success.

I create it using:

    var shaderDefinition = {
        attributes: {
            aPosition: pc.SEMANTIC_POSITION,
            aUv0: pc.SEMANTIC_TEXCOORD0,
            aNormal: pc.SEMANTIC_NORMAL,
        },
        vshader: vertexShader,
        fshader: fragmentShader
    };

    // Create the shader from the definition
    this.shader = new pc.Shader(gd, shaderDefinition);

Where vertexShader and fragmentShader are my GLSL strings with pc.shaderChunks.screenDepthPS prepended to them.

Ideally you'd use something like this:

    const vertex = `#define VERTEXSHADER\n` + pc.shaderChunks.screenDepthPS + files['shader.vert'];
    const fragment = pc.shaderChunks.screenDepthPS + files['shader.frag'];
    const shader = pc.createShaderFromCode(app.graphicsDevice, vertex, fragment, 'GroundFogShader');

as that handles some platform differences, for example GL1 vs GL2.

The depth chunk you're including requires GL2 to be defined when running on WebGL2, which this handles. Otherwise you might need to add '#define GL2' yourself before including that chunk.
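For example, if you keep constructing the shader with new pc.Shader(...), something like this (a sketch; assumes the graphics device exposes a webgl2 flag):

    // prepend the define only when running on a WebGL2 context
    var prefix = app.graphicsDevice.webgl2 ? '#define GL2\n' : '';
    var fshader = prefix + pc.shaderChunks.screenDepthPS + fragmentShader;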

As always, grab a frame using Spector.js and inspect it. Compare your example vs the ground fog one too, to see the data / shaders.

Also there is this; not much in it that wasn't said already, but it's good to mention:
https://developer.playcanvas.com/en/user-manual/graphics/cameras/depth-layer/

I got this working, changing the render layer was the issue. Thank you everyone for all your help, I really appreciate it.

I'm currently having issues implementing the ground fog shader. Sometimes it shows up, sometimes it doesn't; more often than not, it doesn't.

Update: fixed by calling this.camera.requestSceneDepthMap(true); on my camera
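For anyone finding this later, a minimal script version of that fix (a sketch; assumes the script is attached to the camera entity):

    var DepthRequest = pc.createScript('depthRequest');

    DepthRequest.prototype.initialize = function () {
        // keep the scene depth map updated for shaders that sample it
        this.entity.camera.requestSceneDepthMap(true);
    };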
