Depth texture shows pure red when using CameraFrame / RenderPassDepthGrab

Following this older discussion, I’ve been trying to access the scene depth texture (to be used later).

I tested both approaches:

  • RenderPassDepthGrab
  • CameraFrame.rendering.sceneDepthMap = true

However, in both cases, the resulting texture appears pure red when drawn.

Here’s the snippet I used:

// init()
this.entity.camera.requestSceneDepthMap(true);

// update()
const renderPassDepthGrab = this.entity.camera.camera.renderPassDepthGrab;
if (!renderPassDepthGrab) return;

const depthRenderTarget = renderPassDepthGrab.depthRenderTarget;
if (!depthRenderTarget) return;

const depthTexture = this.app.graphicsDevice.isWebGL2
    ? depthRenderTarget.depthBuffer
    : depthRenderTarget.colorBuffer;  // Why colorBuffer here?

console.log('depth texture:', depthTexture);
this.app.drawTexture(0.7, -0.7, 0.5, 0.5, depthTexture);

Is this expected, or is there a correct way to sample it?

Any guidance or confirmation from others who’ve tried this recently would be appreciated!
Project Link: https://playcanvas.com/editor/scene/2346401

A depth texture is quite a specific type of data. It’s important to understand that depth textures come in different formats, and not all of them can be displayed directly on the screen. To visualize such a texture in PlayCanvas, you should use the dedicated helper the engine provides:

this.app.drawDepthTexture(0.7, -0.7, 0.5, 0.5)

Hi @Wagner, thanks for the reply. Displaying the depth is only for debugging; for my actual use-case I need it in a compute shader. Concretely, I’m looking for a reliable way to obtain a depth resource that I can bind as texture_2d<f32>, ideally linear depth in R32F.

Very few mobile devices support R32F textures, so you are better off using an RGBA8 texture into which you encode the float value.
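To illustrate the encoding idea on the CPU side, here is a minimal sketch of the classic float-to-four-bytes packing, the same idea the floatAsUintPS shader chunk implements on the GPU. The function names are illustrative, not engine API:

```javascript
// Sketch only: packs a [0, 1) float into four 8-bit channels and back,
// mirroring what float2uint / uint2float do in the shader.
function packDepthToRGBA8(depth) {
    // spread the value across channels: fract(depth * [1, 255, 255^2, 255^3])
    const scale = [1, 255, 65025, 16581375];
    const enc = scale.map(s => (depth * s) % 1);
    // carry correction: remove the part the next channel already encodes
    for (let i = 0; i < 3; i++) enc[i] -= enc[i + 1] / 255;
    // quantize to bytes, as storing into an RGBA8 texture would
    return enc.map(v => Math.round(v * 255));
}

function unpackRGBA8ToDepth(rgba) {
    const scale = [1, 255, 65025, 16581375];
    return rgba.reduce((sum, v, i) => sum + v / 255 / scale[i], 0);
}
```

This gives far better precision than a single 8-bit channel (roughly one part in 255⁴ for the round trip), which is why the RGBA8 route is viable for depth-based effects on mobile.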

GLSL:


    precision highp float;

    #include "floatAsUintPS"

    uniform highp sampler2D uSceneDepthMap;

    #ifdef WEBGPU

        #ifdef SCENE_DEPTHMAP_FLOAT
            #define getDepth(xy, offset, level) texelFetch(uSceneDepthMap, xy + offset, level).r
        #else
            #define getDepth(xy, offset, level) uint2float(texelFetch(uSceneDepthMap, xy + offset, level))
        #endif

    #else

        #ifdef SCENE_DEPTHMAP_FLOAT
            #define getDepth(xy, offset, level) texelFetchOffset(uSceneDepthMap, xy, level, offset).r
        #else
            #define getDepth(xy, offset, level) uint2float(texelFetchOffset(uSceneDepthMap, xy, level, offset))
        #endif
    
    #endif
    void main() {

        ivec2 xy = ivec2(gl_FragCoord.xy);
        float depth = getDepth(xy, ivec2(0, 0), 0);

        #ifdef WRITE_DEPTH
            gl_FragDepth = depth;
        #elif defined(WRITE_FLOAT)
            gl_FragColor = vec4(vec3(depth), 1.0);
        #else
            gl_FragColor = float2uint(depth);
        #endif
    }
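On the linear-depth part of the question: what comes out of the depth grab is the raw non-linear depth-buffer value, so it still needs to be linearized. A minimal sketch of the standard perspective linearization, assuming a conventional GL-style projection (the near/far values would come from your camera; this is not engine API):

```javascript
// Sketch: turn a non-linear [0, 1] depth-buffer sample into linear view-space
// distance for a standard perspective projection. Assumes the GL convention
// where NDC depth spans [-1, 1]; on WebGPU the NDC depth range is [0, 1],
// so the remap on the first line is not needed there.
function linearizeDepth(depth, near, far) {
    const zNdc = depth * 2 - 1;                     // [0, 1] -> [-1, 1]
    return (2 * near * far) / (far + near - zNdc * (far - near));
}
```

As a sanity check, a sample of 0 maps to the near plane and a sample of 1 maps to the far plane.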

I’m also working through grabbing the scene depth texture. Did you figure this out?

I feel like the depth texture you grabbed was actually valid, just not rendering on-screen properly(?)

I’ve just been doing a custom RT render pipeline to get access to the scene depth texture, and then doing a final render pass RT → render to scene, but it’s a huge performance hit. Personally I’m hoping I can bypass the step of rendering the RT to the scene by grabbing the depth directly from the scene somehow.

Ended up figuring it out for the most part. A custom/extended render pass that uses the linked methods for grabbing scene depth (checking for any warnings) worked great. For some reason RenderPassDepthGrab also worked well, but I couldn’t reference it from the engine, so I just copied the script and hooked it into my render passes.

Turns out grabbing depth was a bit more expensive than I would have liked anyway (20 ms → 28 ms frame time on our lower-end target), so I moved on from grabbing depth for now.

That’s strange, I have not seen a cost like that before (on desktop, anyway).

This was on a 2014 iPad Air; the cost seemed relatively negligible on my desktop as well, so I’m sure it’s fine depending on what you target. Maybe it’s bandwidth related?

Otherwise open to suggestions for a more accurate low end device for the current mobile market 🙂
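For what it’s worth, a back-of-envelope check on the bandwidth guess. Assuming the 2014 iPad Air renders at its native 2048×1536 and the grab copies 4 bytes per pixel (illustrative numbers, not measured):

```javascript
// Rough extra bandwidth of one fullscreen depth copy per frame.
const width = 2048, height = 1536;   // 2014 iPad Air native resolution
const bytesPerPixel = 4;             // RGBA8 or 32-bit depth copy
const fps = 60;

const bytesPerFrame = width * height * bytesPerPixel;
const mbPerFrame = bytesPerFrame / (1024 * 1024);
const mbPerSecond = mbPerFrame * fps;
console.log(`${mbPerFrame.toFixed(1)} MB per frame, ${mbPerSecond.toFixed(0)} MB/s at 60 fps`);
// -> 12.0 MB per frame, 720 MB/s at 60 fps
```

Hundreds of MB/s of extra traffic (write plus read-back) is very noticeable on an older mobile GPU, so the bandwidth explanation seems plausible.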