Multi Camera Color Grabpass

Hello, I’ve been trying to get a distortion effect working by using the grab pass example.
It seems to work for anything rendered by the main scene camera, but I am unable to distort anything rendered by a second camera. If I add the distortion layer to the second camera, it seems to discard the other element rendered by that camera. In this example, the plane is on the main cam, and the sphere on the second.

Here’s the example project:
https://playcanvas.com/project/1064680/overview/multicamcolorgrabpass

I’ve been shuffling camera & material properties, layers, and sort orders around while troubleshooting… so it may be a setup issue. For the shader, I’m really just overriding basePS and endPS with the grab-pass-related fragment code. I see mention of CameraComponent.requestSceneColorMap on the web, but I’m not sure whether it applies to this situation.
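
For reference, the override is along these lines (a simplified sketch, not my exact project code; uSceneColorMap and uScreenSize are the uniform names from the engine’s grab-pass example, the chunk API shown is the older 1.x style, and the albedo-driven offset is just a stand-in for the real distortion input):

const material = new pc.StandardMaterial();

// Declare the grabbed scene color sampler ahead of the standard base chunk.
material.chunks.basePS = 'uniform sampler2D uSceneColorMap;\n' + pc.shaderChunks.basePS;

// Replace the output color with a distorted sample of the grabbed frame.
material.chunks.endPS = [
    'vec2 grabUv = gl_FragCoord.xy * uScreenSize.zw;', // zw holds 1/width, 1/height
    'grabUv += (dAlbedo.xy - 0.5) * 0.05;',            // crude offset driven by albedo
    'gl_FragColor.rgb = texture2D(uSceneColorMap, grabUv).rgb;'
].join('\n');

material.update();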

Thanks for your time!

Is this what you are looking for? https://playcanvas.com/project/1067089/overview/f-multicamcolorgrabpass

For reference, in the example there is this comment:

// Depth layer is where the framebuffer is copied to a texture to be used in the following layers.
// Move the depth layer to take place after World and Skydome layers, to capture both of them.
const depthLayer = app.scene.layers.getLayerById(pc.LAYERID_DEPTH);
app.scene.layers.remove(depthLayer);
app.scene.layers.insertOpaque(depthLayer, 2);

So I’m assuming that when the depth layer is rendered, that’s when it captures the color buffer too?

Is this what you are looking for? https://playcanvas.com/project/1067089/overview/f-multicamcolorgrabpass

Hey, I’m having trouble spotting the distortion on the sphere. Is it being affected? My goal is to have both the plane and sphere distorted by the shader.

So I’m assuming that when the depth layer is rendered, that’s when it captures the color buffer too?

That’s what it sounds like to me, but when I tried inserting the depthLayer code, it seemed to make the distortion render black. (I think I left it commented out in my example).

I did some more messing around with it, and am getting better results with the following:

  • setting the depthLayer further down (in my case, to position 7)
  • setting the layers on the second cam as shown
  • keeping the distortion mesh on its own distortion layer.

I’m not totally sure these are all the variables… but it’s promising!
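
In code, the depth layer move from the first bullet is just the example snippet with a lower position (7 is simply what happened to work in my project):

const depthLayer = app.scene.layers.getLayerById(pc.LAYERID_DEPTH);
app.scene.layers.remove(depthLayer);
app.scene.layers.insertOpaque(depthLayer, 7);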

I’m not sure you need two cameras? One should be enough, and it avoids the cost of copying the framebuffer twice.

You render all normal meshes first (in the World layer). Then grab what has been rendered by adding the Depth layer after the World layer. Then you need another layer that renders after Depth, which uses the grabbed texture and renders on top. If that mesh is transparent, you can use the World layer for this one too; just set up its transparent part to render after the Depth layer.
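
Roughly like this (an untested sketch assuming the default layer composition; cameraEntity and the ‘AfterGrab’ layer name are placeholders, and depending on engine version the explicit color map request may or may not be needed):

const layers = app.scene.layers;

// World opaque renders first, then the Depth layer grabs the framebuffer.
const depthLayer = layers.getLayerById(pc.LAYERID_DEPTH);
layers.remove(depthLayer);
layers.insertOpaque(depthLayer, 2); // after World and Skydome, as in the example

// Anything on a layer after Depth can sample the grabbed texture.
const afterGrab = new pc.Layer({ name: 'AfterGrab' });
layers.insert(afterGrab, layers.getOpaqueIndex(depthLayer) + 1);

// A single camera renders everything, in layer order.
cameraEntity.camera.layers = cameraEntity.camera.layers.concat([afterGrab.id]);

// On newer engine versions, the color grab can be requested explicitly.
cameraEntity.camera.requestSceneColorMap(true);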

Ah interesting, thanks for the follow-up! I think the actual project is using the second camera to set up a totally independent sort order for the meshes in that scene, but I will poke at it and see if it’s really needed.

It sounds like I can still apply this feedback though :clap: … I’ve been using an opaque material in a “distortion” layer, but maybe I can piggyback on an existing layer, as long as it renders after Depth (and its contents don’t need distorting?).
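
If I follow, reusing the World layer’s transparent part would look something like this (a sketch; distortionMaterial is a placeholder, and the mesh must use a transparent blend type to land in that pass):

const layers = app.scene.layers;
const worldLayer = layers.getLayerById(pc.LAYERID_WORLD);
const depthLayer = layers.getLayerById(pc.LAYERID_DEPTH);

// Move only World's transparent sublayer so it draws right after the Depth grab.
layers.removeTransparent(worldLayer);
layers.insertTransparent(worldLayer, layers.getOpaqueIndex(depthLayer) + 1);

// The distortion mesh then needs a transparent material to render in that pass.
distortionMaterial.blendType = pc.BLEND_NORMAL;
distortionMaterial.update();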

If I may ask: how does the manual layer setup you describe relate to the scripted approach above (layers.insertOpaque())? Are they doing the same thing, or am I confusing two different functions? I’m still wrapping my head around this system… so feel free to tell me to rtfm :stuck_out_tongue:

In engine-only code you use layers.insertOpaque(); in an Editor project you can do that too, but typically you use Settings → Layers to set this up in the UI.

It’s also handy to install Spector.js (as a Chrome extension, for example), capture a frame and inspect what takes place… you can see when layers render, when the grab pass happens, and so on.
