I have a scene with the usual layers, plus “Layer1” and “Layer2” added on top, and a single camera. What I want to achieve is to get the camera image after Layer1 is rendered but before Layer2 is applied, and additionally the final camera image with both Layer1 and Layer2, and then combine those two images in a post-processing effect. How can I achieve this? Since the layers specify the drawing order, I thought I could somehow take a “screenshot” after Layer1 was drawn but before Layer2 is drawn… Thanks in advance.
Might be best to have all objects that are on Layer1 also be on Layer2. That way you don’t need an image for Layer1 and one for Layer1 + Layer2; it would be one image for Layer1 and one for Layer2.
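In case a concrete snippet helps, here is a rough sketch of what I mean (assuming you’re on PlayCanvas and using its layer/render-target API; the function name createLayerTarget and the combineMaterial parameter names are just placeholders):

```ts
import * as pc from 'playcanvas';

// Give a layer its own render target, so everything drawn on that layer
// ends up in a texture instead of going straight to the screen.
function createLayerTarget(app: pc.Application, layerName: string): pc.Texture {
    const device = app.graphicsDevice;

    const colorBuffer = new pc.Texture(device, {
        width: device.width,
        height: device.height,
        format: pc.PIXELFORMAT_R8_G8_B8_A8,
        addressU: pc.ADDRESS_CLAMP_TO_EDGE,
        addressV: pc.ADDRESS_CLAMP_TO_EDGE
    });

    const layer = app.scene.layers.getLayerByName(layerName);
    if (!layer) {
        throw new Error(`Layer "${layerName}" not found`);
    }

    layer.renderTarget = new pc.RenderTarget({
        colorBuffer: colorBuffer,
        depth: true
    });

    return colorBuffer;
}

// Inside a script you would use this.app instead.
const app = pc.Application.getApplication()!;

// One texture per layer, combined later in the post-processing step,
// e.g. via shader parameters on a full-screen material (placeholder names):
const layer1Tex = createLayerTarget(app, 'Layer1');
const layer2Tex = createLayerTarget(app, 'Layer2');
// combineMaterial.setParameter('uLayer1Tex', layer1Tex);
// combineMaterial.setParameter('uLayer2Tex', layer2Tex);
```

Then your post effect just samples both textures and blends them however you like.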
Thanks for the example, I’ll take a look at it. Actually, I have my main objects in the World layer. On Layer1 I only have some shadow casters, and in Layer2 I have another object.
So, when I apply a render target to a layer, will only the objects in that specific layer be included, or everything up to that layer? E.g. if I have the order World-Layer1-Layer2 and I set my render target on Layer1, will the World objects also be included in this target?
There is another question that occurred to me while looking at your project. When you set the render target on the layer, you get the texture from the viewpoint of the camera, but you never specified the camera as a source for the render target. So, does it automatically take that camera because it is the only one in the project? And how would it work with multiple cameras?
The cameras render to the layer. If you have multiple cameras, it depends on the priority of the cameras (plus other settings, such as whether they clear the color buffer, depth buffer, etc.).
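To make that a bit more concrete, a hedged sketch (PlayCanvas camera component API assumed; the entity names and layer assignments are just examples). Each camera lists the layers it renders, the priority decides which one renders first, and a later camera has to leave the color buffer alone or it overwrites the earlier result:

```ts
import * as pc from 'playcanvas';

// Inside a script you would use this.app instead.
const app = pc.Application.getApplication()!;

const worldLayer = app.scene.layers.getLayerByName('World')!;
const layer1 = app.scene.layers.getLayerByName('Layer1')!;
const layer2 = app.scene.layers.getLayerByName('Layer2')!;

// First camera: renders World + Layer1 and clears the buffers.
// A lower priority value means it renders earlier.
const mainCam = new pc.Entity('MainCamera');
mainCam.addComponent('camera', {
    priority: 0,
    layers: [worldLayer.id, layer1.id],
    clearColorBuffer: true,
    clearDepthBuffer: true
});

// Second camera: renders Layer2 on top. It must not clear the color
// buffer, otherwise it would wipe out what the first camera drew.
const overlayCam = new pc.Entity('OverlayCamera');
overlayCam.addComponent('camera', {
    priority: 1,
    layers: [layer2.id],
    clearColorBuffer: false,
    clearDepthBuffer: false
});

app.root.addChild(mainCam);
app.root.addChild(overlayCam);
```

So whichever cameras include a layer in their layer list are the ones that fill that layer’s render target, in priority order.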