I’d like to render a second depth map from a view angle different from the main camera’s. I’ve tried to follow this example:
https://playcanvas.vercel.app/#/graphics/render-to-texture
However, I’m not sure how to convert this to render only depth instead of the color buffer.
I’ve had a look at the engine code responsible for shadow maps, which should be similar to what I’m trying to do.
Here’s my code so far:
var SensorCamera = pc.createScript('sensorCamera');

SensorCamera.prototype.initialize = function () {
    this.setupCamera();
};

SensorCamera.prototype.setupCamera = function () {
    const format = pc.PIXELFORMAT_R32F;
    const formatName = pc.pixelFormatInfo.get(format)?.name;

    // create texture and render target for rendering into, including depth buffer
    const texture = new pc.Texture(this.app.graphicsDevice, {
        format: format,
        width: 1024,
        height: 1024,
        mipmaps: false,
        minFilter: pc.FILTER_LINEAR,
        magFilter: pc.FILTER_LINEAR,
        addressU: pc.ADDRESS_CLAMP_TO_EDGE,
        addressV: pc.ADDRESS_CLAMP_TO_EDGE,
        name: `SensorMap2D_${formatName}`
    });
    this.texture = texture;

    const renderTarget = new pc.RenderTarget({
        depthBuffer: texture,
        flipY: !this.app.graphicsDevice.isWebGPU
    });

    const worldLayer = this.app.scene.layers.getLayerByName('World');
    const skyboxLayer = this.app.scene.layers.getLayerByName('Skybox');

    const cameraEntity = new pc.Entity('SensorCamera');
    cameraEntity.addComponent('camera', {
        layers: [worldLayer.id, skyboxLayer.id],
        // toneMapping: pc.TONEMAP_ACES,

        // set the priority of this camera to a lower number than the priority of the
        // main camera (which defaults to 0) to make it render first each frame
        priority: -1,

        // this camera renders into the texture target
        renderTarget: renderTarget
    });
    this.entity.addChild(cameraEntity);
};

SensorCamera.prototype.update = function (dt) {
    const material = new pc.Material();
    material.cull = pc.CULLFACE_NONE;
    material.setParameter('uSceneDepthMap', this.texture);
    material.shader = this.app.scene.immediate.getDepthTextureShader();
    material.update();
    this.app.drawTexture(0.7, -0.7, 0.5, 0.5, null, material);
};
Currently I’m getting the following error:
Framebuffer creation failed with error code FRAMEBUFFER_INCOMPLETE_ATTACHMENT, render target: SensorMap2D_R32F
Also, I’m not sure if I even need a camera component and an extra entity for this. It seems like shadow maps only use an internal camera, but I’m unsure how to inject a camera into the render pipeline.
your render target creation seems about right, but you cannot use PIXELFORMAT_R32F as a depth buffer format; you need PIXELFORMAT_DEPTH or similar.
And yes, you need a camera, which lets you specify what gets rendered. Your setupCamera function seems OK; it does the right things.
Your update function creates a new material each frame, which will give you lots of materials. I guess you do it to see the depth. I think this will no longer work with PIXELFORMAT_DEPTH, as that format has limitations.
Material is only used for debugging right now.
What format is PIXELFORMAT_DEPTH? I’d like to visualize it for debugging purposes.
it’s a gpu depth format, used for depth buffer.
It’s tricky to visualize. This example does, but only after we copy the depth buffer to a different format: PlayCanvas Examples
You could capture a frame and see it that way perhaps
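If it helps while debugging: once you can read raw depth values back (for example after copying the buffer to a readable format, as in that example), converting the stored non-linear [0, 1] value to linear eye-space depth is straightforward. A minimal sketch in plain JavaScript (not PlayCanvas API), assuming the usual OpenGL-style perspective projection:

```javascript
// Convert a non-linear depth-buffer sample d in [0, 1] to linear eye-space
// depth, assuming an OpenGL-style perspective projection (NDC z in [-1, 1]).
function linearizeDepth(d, near, far) {
    const zNdc = 2 * d - 1; // remap [0, 1] -> [-1, 1]
    return (2 * near * far) / (far + near - zNdc * (far - near));
}

// Sanity checks: d = 0 maps to the near plane, d = 1 to the far plane.
console.log(linearizeDepth(0, 0.1, 100)); // ≈ 0.1 (near plane)
console.log(linearizeDepth(1, 0.1, 100)); // ≈ 100 (far plane)
```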
maybe an easier way would be to follow something like this
https://playcanvas.vercel.app/#/graphics/multi-render-targets
and use both a color buffer and a depth buffer, and override the output.frag (see a tab there) to output depth to the color buffer. Note that R32F is not that well supported on older Android devices, so you might need a workaround.
In engine v2, you could probably just use engine/src/extras/render-passes/render-pass-prepass.js (at main · playcanvas/engine · GitHub) - attach it to the camera and let it render linear depth for you.
a lot depends on what you want to do with that depth map too.
Thanks for your detailed response. Sorry for the late reply, I was a bit caught up in other projects.
Using PIXELFORMAT_DEPTH gets rid of the error and the camera renders. However, as you already stated, I don’t know how to read the depth from that format. I was hoping I could just use getLinearDepth from the screenDepthPS shader chunk, but it seems like uSceneDepthMap always returns the depth of the main camera?
I’ve also tried using the depth texture generated with my second camera instead of uSceneDepthMap, but this only produces a fully black/white (depending on linearizeDepth/unpackFloat) texture.
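For context on the unpackFloat part: it is the classic GLSL trick of spreading a [0, 1) value across four 8-bit channels, used for depth on targets without float textures. A plain-JavaScript sketch of the general pack/unpack pair (illustrative of the technique, not any specific PlayCanvas function):

```javascript
// Encode a value v in [0, 1) into four channels, each holding successively
// finer fractional bits (the classic GLSL "pack float into RGBA8" trick).
// Note: a real RGBA8 texture additionally quantizes each channel to 8 bits,
// which limits the precision of the round trip.
function packFloatToRGBA8(v) {
    const enc = [v, v * 255, v * 65025, v * 16581375].map(x => x - Math.floor(x));
    // remove the bits that the next channel already stores
    enc[0] -= enc[1] / 255;
    enc[1] -= enc[2] / 255;
    enc[2] -= enc[3] / 255;
    return enc;
}

// Decode: weighted sum of the channels (what an unpackFloat-style
// shader function computes).
function unpackRGBA8ToFloat(enc) {
    return enc[0] + enc[1] / 255 + enc[2] / 65025 + enc[3] / 16581375;
}
```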
I’ve got a few questions regarding rendering with 2 cameras:
- Is uSceneDepthMap unique per camera, or is it some kind of global texture?
- Is the default depth map also rendered with PIXELFORMAT_DEPTH? And if so, how is uSceneDepthMap created from it?
Also here is another clarification of what I am trying to achieve:
- Render additional depth maps from different cameras in the scene
- Use the generated depth maps to create a custom depth shader effect (for now just changing the albedo to green, basically visualizing what the secondary camera sees)
Here is my sample project:
https://playcanvas.com/project/1298114/overview/depth-effect
The examples you posted are interesting. However, those are all done with 1 camera, and to me it looks like uSceneDepthMap always contains the depth map of the main camera (or the camera which was rendered last?).
What I didn’t try yet is rendering the second camera to a color buffer with depth enabled, and trying to use the depth from it with getLinearDepth. I guess this should work, but that would mean I render the color buffer as well, which I don’t need.
If you’re using Engine v2, which I would recommend, one option would be to use RenderPassPrepass on your other camera. This directly renders linear depth into the color buffer, and also uses the depth buffer. Note that the cost of the depth buffer is not large. The memory is allocated and cleared every frame, but we do not load nor write it to memory on tiled devices, so it does not really consume the bandwidth. I used this pass as a base pass for postprocessing that needs depth.
See this example on how to attach render passes to the camera: PlayCanvas Examples
Note that the example attaches two passes; you’d just include the one. Note that RenderPassPrepass is currently not exported by the engine, so you might need to have your own copy of it (or a custom engine build with this class exported from extras/index.js).
I’ll definitely give it a try, however I’m a bit hesitant as the editor for v2 is still in beta and the documentation on render passes seems scarce.
The Editor v2 is coming out of beta any day now by the way.
The docs on render passes are limited - we’re keeping them private for now, while using the API ourselves, to avoid making changes once they go public. They’re pretty stable though, there have been no changes in a while.
actually, thinking about it … in engine v2 this should do it. Just enable depth rendering on the CameraFrame like this:
const cameraEntity = new pc.Entity();
cameraEntity.addComponent('camera');
const cameraFrame = new pc.CameraFrame(app, cameraEntity.camera);
cameraFrame.rendering.sceneDepthMap = true;
cameraFrame.update();
app.root.addChild(cameraEntity);
and use this color texture:
cameraFrame.renderPassCamera.prePass.linearDepthTexture
Works like a charm!
var DepthMapV2 = pc.createScript('depthMapV2');

DepthMapV2.prototype.initialize = function () {
    this.setup();
};

DepthMapV2.prototype.setup = function () {
    const cameraEntity = new pc.Entity();
    cameraEntity.addComponent('camera', {
        // priority: -1,
    });

    const cameraFrame = new pc.CameraFrame(this.app, cameraEntity.camera);
    cameraFrame.rendering.sceneDepthMap = true;
    cameraFrame.update();

    this.entity.addChild(cameraEntity);
    this.cameraFrame = cameraFrame;
};

DepthMapV2.prototype.update = function (dt) {
    this.app.drawTexture(0.7, -0.7, 0.5, 0.5, this.cameraFrame.renderPassCamera.prePass.linearDepthTexture);
};
I do have a question regarding drawTexture though.
When I set the depth camera priority to -1, my main camera will clear the framebuffer but will not render the texture again. It seems like drawTexture only applies to the current camera/render pass. Is this intended behaviour? I always thought the immediate layer is rendered by all cameras/render passes.
EDIT: Seems like this behaviour differs from drawWireSphere, which renders in both cameras.
I think you might be right, but those drawTexture calls are for internal debugging only, not a public API. Maybe you can create a layer which you only add to the main camera, and pass that to drawTexture; that should work around it.
For anyone interested I got it working now:
https://playcanvas.com/project/1298114/overview/depth-effect
Current solution:
- Render depth maps with CameraFrame
- Inject generated depth maps into materials
Possible improvements:
- Only render depth (currently the color buffer is rendered as well)
- Use sampler2DShadow for hardware anti-aliasing of depth maps (something similar to shadow map implementation)
I’ve also briefly experimented with RenderPassPrePass but couldn’t get anything working.
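On the sampler2DShadow idea from the list above: a hardware comparison sampler doesn’t return a depth value, it returns a filtered pass/fail result. What the hardware does can be sketched in plain JavaScript as 2x2 percentage-closer filtering (illustrative only, not engine code; depthMap here is assumed to be a flat row-major array of depth values):

```javascript
// Software sketch of a sampler2DShadow comparison fetch: compare a reference
// depth against the 2x2 texel neighborhood around (u, v) and bilinearly blend
// the 0/1 comparison results (percentage-closer filtering).
function shadowSample(depthMap, width, height, u, v, refDepth) {
    const x = u * width - 0.5;
    const y = v * height - 0.5;
    const x0 = Math.floor(x), y0 = Math.floor(y);
    const fx = x - x0, fy = y - y0;

    const clamp = (i, n) => Math.min(Math.max(i, 0), n - 1);
    const texel = (i, j) => depthMap[clamp(j, height) * width + clamp(i, width)];
    const cmp = (d) => (refDepth <= d ? 1 : 0); // 1 = in front of stored depth

    const c00 = cmp(texel(x0, y0)),     c10 = cmp(texel(x0 + 1, y0));
    const c01 = cmp(texel(x0, y0 + 1)), c11 = cmp(texel(x0 + 1, y0 + 1));
    return (c00 * (1 - fx) + c10 * fx) * (1 - fy)
         + (c01 * (1 - fx) + c11 * fx) * fy;
}
```

Sampling at a texel boundary returns fractional coverage instead of a hard 0/1 edge, which is where the "free" anti-aliasing comes from.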