renderTarget by layer?

Before the last update it was possible to set the render target on the camera.
Now it seems to have moved to individual layers.
How is this supposed to work?
If I want to render the whole camera (with multiple layers) to the same render target, what should I do?



renderTarget by layer seems to make a lot less sense than renderTarget by camera

You can render a layer to a renderTarget, apply a post-effect to only that layer and then draw it.

Sure, but you could achieve that with a relatively harmless workaround: simply use an additional camera.

On the other hand, the workarounds required for use cases that call for a renderTarget per camera (of which I claim there are a lot more, such as mirrors/reflections, in-game video screens and so on) now involve creating a new layer, assigning that layer to all objects in the scene, creating a new camera that renders only that layer, and so on.

I can see that there are use cases that might prefer a renderTarget per layer over a renderTarget per camera, but there are fewer of those, and they can easily be achieved with an additional camera, whereas the opposite workaround is a lot uglier.


For anyone who wants to reverse the change and have their renderTargets per camera rather than per layer, here’s a script from hell to do that:

    // Compile a function's source string back into a Function object.
    if (typeof String.prototype.parseFunction != 'function') {
        String.prototype.parseFunction = function () {
            var funcReg = /function *\(([^()]*)\)[ \n\t]*{(.*)}/gmi;
            var match = funcReg.exec(this.replace(/\n/g, ' '));

            if (match) {
                return new Function(match[1].split(','), match[2]);
            }

            return null;
        };
    }

    // Replace every occurrence of `search` in the string.
    if (typeof String.prototype.replaceAll != 'function') {
        String.prototype.replaceAll = function (search, replacement) {
            var target = this;
            return target.replace(new RegExp(search, 'g'), replacement);
        };
    }

    // Rewrite a function's source so it reads camera.renderTarget
    // instead of layer.renderTarget, then recompile it.
    function patchFunction(func) {
        return func.toString().replaceAll('layer.renderTarget', 'camera.renderTarget').parseFunction();
    }

    pc.ForwardRenderer.prototype.renderComposition = patchFunction(pc.ForwardRenderer.prototype.renderComposition);
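For the curious, the string-rewriting trick above can be demonstrated without the engine. This is a stand-alone sketch of the same idea; `drawThing` and its two property names are made up for illustration and are not PlayCanvas API (the regex here also tolerates named functions, which the script above doesn't need):

```javascript
// Stand-alone demo of the string-rewriting patch technique; all names
// below are hypothetical, not PlayCanvas API.
function parseFunction(src) {
    // Allow anything between `function` and `(` so named functions work too.
    var funcReg = /function[^(]*\(([^()]*)\)[ \n\t]*{(.*)}/mi;
    var match = funcReg.exec(src.replace(/\n/g, ' '));
    if (match) {
        return new Function(match[1].split(','), match[2]);
    }
    return null;
}

function patchFunction(func, from, to) {
    // Rewrite the source of `func`, then compile it back into a function.
    return parseFunction(func.toString().split(from).join(to));
}

function drawThing(obj) {
    return obj.layerTarget;
}

var patched = patchFunction(drawThing, 'layerTarget', 'cameraTarget');
console.log(patched({ layerTarget: 'A', cameraTarget: 'B' })); // → "B"
```

The original `drawThing` is untouched; only the recompiled copy reads the other property, which is exactly how the script above swaps `layer.renderTarget` for `camera.renderTarget`.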

here’s a script from hell

Jeeez, it’s for real!

I think you’re missing the point of layers. It’s a nice feature, especially for the new post effects, which are currently unreleased and undocumented but have been in the engine since December.


I don’t think I’m missing the point. I can see that there are merits in certain use cases, but nothing you couldn’t achieve just as well with renderTargets per camera by adding an additional camera.
Now, with the current setup, if you want a camera to render a second view that you project onto a texture (think security cam), or if you want to implement a real-time mirror or reflective water, how do you do that without creating an extra layer and adding that layer to all your objects?

I will:

1. Render the whole damn world into a custom render target.
2. Copy that to another one with pc.CopyRenderTarget.
3. Draw the first render target.
4. Draw the water with a shader, passing in the copy of the world’s render target.
5. Take a coffee.
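Step 1 could be wired up roughly like this, assuming the layer API discussed in this thread (`layer.renderTarget`). The texture size, the `'World'` layer name and the `setupWorldCapture` wrapper are illustrative assumptions, not the poster’s code; it is wrapped in a function so it only runs once a PlayCanvas `app` exists:

```javascript
// Sketch only: point the World layer at an offscreen render target so the
// scene renders there instead of the backbuffer. All names are assumptions.
function setupWorldCapture(app) {
    var color = new pc.Texture(app.graphicsDevice, {
        width: 512,
        height: 512,
        format: pc.PIXELFORMAT_R8_G8_B8_A8
    });
    var rt = new pc.RenderTarget(app.graphicsDevice, color, { depth: true });
    // Step 1: the whole world now renders into `rt`.
    app.scene.layers.getLayerByName('World').renderTarget = rt;
    return rt;
}
```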


But for that you need to set up a custom layer, right?

No. I already have a world layer.

But yes, I probably have to specify a special one for the water.

Also, I recently made a nice heat-haze refraction effect with that.

That’s great but I bet you could do that just as well with an additional camera.

The main point is that with an additional layer per renderTarget you are forced to iterate over all your entities and add that layer to them, and to remember to add it whenever you create a new entity, otherwise the new entity won’t show up in the reflection. With a renderTarget per camera you don’t have this limitation and can still do all the custom effects you want (and also assign additional layers if you want to, you just don’t have to). The only annoyance is that you have to duplicate a camera, which is the smaller one.

What? No.
You have a World layer, a Depth layer and a UI layer.
When you add an entity, it goes to World by default, or to UI if it has an element component.

You can also disable a layer, and then it won’t be rendered.

And the layers system is an abstraction. It’s good to work with abstractions.

Right, but you need to render your world twice each frame: once from a mirrored camera into a render texture, and then a second time from the actual camera into the backbuffer, using the render texture as the reflection texture. How do you do that without additional layers?

What’s the problem with additional layers? You can manipulate them easily.

So with layers you don’t have to render your scene twice. You can render your mirror after the world.


Of course you still have to render twice. The disadvantage of the additional layer is that I have to assign it to each and every one of my objects.

No! You don’t have to!
By default it will always be on the World layer!
I already explained why.

You can render the whole world except the mirror into a renderTarget and then use it when you render the water or whatever. You don’t have to render the scene twice.


Alright, so how do you render the World layer both into a renderTarget (to generate the reflection texture) and into the backbuffer (to render your actual scene) within the same frame?

For real-time reflections you always have to render twice.
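For context on why the second pass needs its own perspective: a mirror camera is the real camera reflected across the mirror plane, so you cannot reuse the first pass. A minimal sketch with plain arrays instead of engine vector types (the horizontal plane y = h and all names here are illustrative; a real implementation would also flip the winding order or use an oblique clip plane):

```javascript
// Reflect a camera position and view direction across a horizontal
// mirror plane y = h. This only shows the reflected transform.
function reflectPointY(p, h) {
    return [p[0], 2 * h - p[1], p[2]];
}

function reflectDirY(d) {
    return [d[0], -d[1], d[2]];
}

var camPos = [3, 4, -2];
var camDir = [0, -1, 1];
var mirrorPos = reflectPointY(camPos, 1); // → [3, -2, -2]
var mirrorDir = reflectDirY(camDir);      // → [0, 1, 1]
```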

I would render it into a renderTarget.
Then copy from it to the backbuffer with pc.copyRenderTarget.


That A. is expensive (and unnecessary), and B. still won’t give you real-time mirror reflections (the only thing you could do that way would be screen-space reflections, which have limited applicability), because for real-time mirror reflections you have to render your scene twice from different perspectives!

Or let’s take a real-time reflection probe that renders the scene into a cube map each frame for real-time reflections on curved surfaces: with the current per-layer renderTarget setup you’ll have to create six new layers and assign their IDs to all of your scene objects to achieve this.

Whaaaat? With WebGL 2 it just resolves it (which is extremely fast). With WebGL 1 it is acceptable as well.

Sure, real-time reflections are better with a twice-rendered scene.

But with layers you can render only the objects that should be reflected.
You can create a layer “Reflective” and put a second camera on it. Then you can add the meshes that should be reflected and render only this layer.

It’s still faster than rendering the scene twice.
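The “Reflective” layer idea could be wired up roughly like this; a sketch assuming the layer API of that engine version, where `makeReflectiveLayer`, the layer name and the use of the `model` component are assumptions for illustration, not the poster’s code:

```javascript
// Sketch: create a "Reflective" layer, point a second camera at only that
// layer, and add selected mesh entities to it. Names are illustrative.
function makeReflectiveLayer(app, reflectionCamera, meshEntities) {
    var layer = new pc.Layer({ name: 'Reflective' });
    app.scene.layers.push(layer);
    // The reflection camera renders only this layer.
    reflectionCamera.camera.layers = [layer.id];
    // Each reflective mesh keeps its old layers and joins the new one too.
    meshEntities.forEach(function (e) {
        e.model.layers = e.model.layers.concat(layer.id);
    });
    return layer;
}
```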
