Rendering a secondary depth map

I’d like to render a second depth map from a different view angle than the main camera. I’ve tried to follow this example:
https://playcanvas.vercel.app/#/graphics/render-to-texture

However, I’m not sure how to convert this to render only depth instead of rendering the color buffer.

I’ve had a look at the engine code responsible for shadow maps, which should be similar to what I’m trying to do.
Here’s my code so far:

var SensorCamera = pc.createScript('sensorCamera');

SensorCamera.prototype.initialize = function () {
    this.setupCamera();
};

SensorCamera.prototype.setupCamera = function (dt) {

    const format = pc.PIXELFORMAT_R32F;
    const formatName = pc.pixelFormatInfo.get(format)?.name;

    // create texture and render target for rendering into, including depth buffer
    const texture = new pc.Texture(this.app.graphicsDevice, {
        format: format,
        width: 1024,
        height: 1024,
        mipmaps: false,
        minFilter: pc.FILTER_LINEAR,
        magFilter: pc.FILTER_LINEAR,
        addressU: pc.ADDRESS_CLAMP_TO_EDGE,
        addressV: pc.ADDRESS_CLAMP_TO_EDGE,
        name: `SensorMap2D_${formatName}`
    });

    this.texture = texture;

    const renderTarget = new pc.RenderTarget({
        depthBuffer: texture,
        flipY: !this.app.graphicsDevice.isWebGPU
    });

    const worldLayer = this.app.scene.layers.getLayerByName('World');
    const skyboxLayer = this.app.scene.layers.getLayerByName('Skybox');

    const cameraEntity = new pc.Entity('SensorCamera');
    cameraEntity.addComponent('camera', {
        layers: [worldLayer.id, skyboxLayer.id],
        // toneMapping: pc.TONEMAP_ACES,

        // set the priority of textureCamera to lower number than the priority of the main camera (which is at default 0)
        // to make it rendered first each frame
        priority: -1,

        // this camera renders into texture target
        renderTarget: renderTarget,

    });
    this.entity.addChild(cameraEntity);
};

SensorCamera.prototype.update = function (dt) {
    const material = new pc.Material();
    material.cull = pc.CULLFACE_NONE;
    material.setParameter('uSceneDepthMap', this.texture);
    material.shader = this.app.scene.immediate.getDepthTextureShader();
    material.update();
    this.app.drawTexture(0.7, -0.7, 0.5, 0.5, null, material);
};

Currently I’m getting the following error:
Framebuffer creation failed with error code FRAMEBUFFER_INCOMPLETE_ATTACHMENT, render target: SensorMap2D_R32F

Also, I’m not sure if I even need a camera component and an extra entity for that. It seems like shadow maps only use an internal camera. However, I’m unsure how to inject a camera into the render pipeline.

Your render target creation seems about right, but you cannot use PIXELFORMAT_R32F as a depth buffer format; you need PIXELFORMAT_DEPTH or similar.
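
If it helps, a minimal sketch of what I mean (untested, just your texture creation with the format swapped and no color buffer):

// depth texture using a GPU depth format instead of R32F
const depthTexture = new pc.Texture(this.app.graphicsDevice, {
    format: pc.PIXELFORMAT_DEPTH,
    width: 1024,
    height: 1024,
    mipmaps: false,
    minFilter: pc.FILTER_NEAREST,   // depth formats are typically sampled without filtering
    magFilter: pc.FILTER_NEAREST,
    addressU: pc.ADDRESS_CLAMP_TO_EDGE,
    addressV: pc.ADDRESS_CLAMP_TO_EDGE,
    name: 'SensorDepthMap'
});

// depth-only render target: no colorBuffer, the depth texture is the only attachment
const renderTarget = new pc.RenderTarget({
    depthBuffer: depthTexture,
    flipY: !this.app.graphicsDevice.isWebGPU
});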

And yes, you need a camera, which lets you specify what gets rendered. Your setupCamera function seems OK; it does the right things.

Your update function creates a material each frame, which will give you lots of materials. I guess you do it to see the depth. I think this will no longer work with PIXELFORMAT_DEPTH, as that format has limitations.
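
If you keep the debug draw, the material could at least be created once and reused, something like this sketch based on your own code:

// create the debug material once, e.g. at the end of setupCamera
this.debugMaterial = new pc.Material();
this.debugMaterial.cull = pc.CULLFACE_NONE;
this.debugMaterial.shader = this.app.scene.immediate.getDepthTextureShader();
this.debugMaterial.setParameter('uSceneDepthMap', this.texture);
this.debugMaterial.update();

// update then only draws the quad with the existing material
SensorCamera.prototype.update = function (dt) {
    this.app.drawTexture(0.7, -0.7, 0.5, 0.5, null, this.debugMaterial);
};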

Material is only used for debugging right now.

What format is PIXELFORMAT_DEPTH? I’d like to visualize it for debugging purposes.

It’s a GPU depth format, used for the depth buffer.
It’s tricky to visualize. This example does it, but only after the depth buffer is copied to a different format: PlayCanvas Examples

You could capture a frame and see it that way, perhaps.

Maybe an easier way would be to follow something like this:
https://playcanvas.vercel.app/#/graphics/multi-render-targets

and use a color buffer and a depth buffer, and override output.frag (see the tab there) to output depth to the color buffer. Note that R32F is not that well supported on older Android devices, so you might need a workaround.
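
The render target setup for that approach would be roughly this (a sketch; the actual output.frag override is shown in the example, so I’m not repeating it here):

// color buffer to receive the depth values written by the overridden output.frag
const colorTexture = new pc.Texture(this.app.graphicsDevice, {
    format: pc.PIXELFORMAT_R32F,   // may need an RGBA8 + packing fallback on older Android
    width: 1024,
    height: 1024,
    mipmaps: false,
    minFilter: pc.FILTER_NEAREST,
    magFilter: pc.FILTER_NEAREST,
    addressU: pc.ADDRESS_CLAMP_TO_EDGE,
    addressV: pc.ADDRESS_CLAMP_TO_EDGE,
    name: 'SensorLinearDepth'
});

// render target with the color buffer plus an internal depth buffer for depth testing
const renderTarget = new pc.RenderTarget({
    colorBuffer: colorTexture,
    depth: true,
    flipY: !this.app.graphicsDevice.isWebGPU
});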

In engine v2, you could probably just use engine/src/extras/render-passes/render-pass-prepass.js (in the playcanvas/engine repo on GitHub): attach it to the camera and let it render linear depth for you.

A lot depends on what you want to do with that depth map, too.