[SOLVED] Layers and the effect of their 'Order'

The ‘Layers’ area of the engine+editor may appear quite simple to most developers, but I am not sure I fully understand the inner workings 100% (I do get that a ‘Transparency’ layer is needed for the opacity effect, and some months ago I worked out the routine for assigning layers to objects).

But I am still not fully clear on how the layers (as set in the editor/settings) affect each other with regard to their order in the settings.
For instance, here is how I have set up the various layers in the editor in my project → will two consecutive ‘Transparency’ layers affect the render (and the same for two consecutive opaque layers)?

Edit, and a relevant subquestion: is it possible to log out the ‘active layers’ and their state?
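For reference, a minimal sketch of how the active layers and their state could be logged, assuming this runs inside a PlayCanvas script where `this.app` is available:

```javascript
// Sketch: log each entry in the layer composition with its current state.
// Note: a layer typically appears twice in layerList, once for its opaque
// and once for its transparent sub-layer.
var composition = this.app.scene.layers; // pc.LayerComposition

composition.layerList.forEach(function (layer) {
    console.log(
        layer.name,
        'id:', layer.id,
        'enabled:', layer.enabled,
        'opaqueSortMode:', layer.opaqueSortMode,
        'transparentSortMode:', layer.transparentSortMode
    );
});
```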


Additionally: here is the setup in a video clip, where the disappearing effect is active above a certain distance but not when getting closer (?)

As a very important side note: I did try to change the ‘Far clip’ setting, which doesn’t seem to be the issue.

In general, you’d want to set up all opaque layers to render together first, and then render all transparent layers. You do not want to have them interleaved as you do.
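For illustration only (the layer names ‘World’ and ‘Glass’ are assumptions, not taken from this project), a sketch of how sub-layers could be rearranged from a script so that all opaque sub-layers render before all transparent ones; the same ordering can simply be set in the editor’s rendering settings:

```javascript
// Sketch: re-insert two existing layers so that both opaque sub-layers
// render before both transparent sub-layers.
var layers = this.app.scene.layers; // pc.LayerComposition
var world = layers.getLayerByName('World');
var glass = layers.getLayerByName('Glass');

// Remove the current entries (both sub-layers of each layer)...
layers.remove(world);
layers.remove(glass);

// ...then push them back in the desired order.
layers.pushOpaque(world);       // 1. World (opaque)
layers.pushOpaque(glass);       // 2. Glass (opaque)
layers.pushTransparent(world);  // 3. World (transparent)
layers.pushTransparent(glass);  // 4. Glass (transparent)
```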


Ok, appreciate the response/feedback @mvaligursky (will try it as well)

… still, closing this myself as I found a workaround for the issue.
I scaled everything down by a factor of 10, which worked - and to my mind, this implies that there are some major bugs in the ‘camera-pipeline-renderer entanglement’ (or however one best describes it).

@mvaligursky: cf. this closed issue at the GitHub level as well (Rendering seem to fail on external GLB files (works at first load, but not second) · Issue #4268 · playcanvas/engine · GitHub) …

The issue you’re facing might be related to the camera near/far distances, maybe?
Or perhaps there are some transparent objects that write to the depth buffer and cover other objects.
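If the latter is the cause, a hedged sketch (assuming access to the mesh’s material via a render component) of stopping a transparent material from writing depth:

```javascript
// Sketch: a semi-transparent material that no longer writes to the depth
// buffer, so it does not occlude meshes rendered after it.
var material = this.entity.render.meshInstances[0].material;
material.depthWrite = false; // still depth-tested, just not written
material.update();
```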

ok … but as stated above, I “did try to change the ‘Far clip’ setting” // thanks anyway, but I still hope that ‘our’ developers will revisit these kinds of rendering issues at a more fundamental code level


This is my understanding of how this all works, and Martin may correct me in places.

Cameras render to a render target, which is usually the backbuffer or a texture, and this is set on a per-camera basis.
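For illustration, a sketch of pointing a camera at a texture render target instead of the backbuffer (the size and setup here are assumptions):

```javascript
// Sketch: render a camera into a texture instead of the backbuffer.
var device = this.app.graphicsDevice;

var colorBuffer = new pc.Texture(device, {
    width: 512,
    height: 512,
    format: pc.PIXELFORMAT_R8_G8_B8_A8
});

var renderTarget = new pc.RenderTarget({
    colorBuffer: colorBuffer,
    depth: true
});

// A camera without a render target assigned renders to the backbuffer.
this.entity.camera.renderTarget = renderTarget;
```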

The layers listed on the camera are rendered to the render target in the order they are listed in the rendering settings of the project.

If multiple cameras are rendering to the same render target, then the order in which the cameras render is set per camera via the priority on the camera.

i.e. Camera A with priority 0 would render all its listed layers onto the render target first, then Camera B with priority 1 would render all its listed layers onto the render target on top of the result from Camera A.

This is also known as Camera Stacking.
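A sketch of that stacking in code, assuming two camera entities named ‘CameraA’ and ‘CameraB’ (the names are assumptions):

```javascript
// Sketch: two cameras rendering into the same render target (the backbuffer).
// CameraA renders first (lower priority value), CameraB renders on top.
var cameraA = this.app.root.findByName('CameraA').camera;
var cameraB = this.app.root.findByName('CameraB').camera;

cameraA.priority = 0;
cameraB.priority = 1;

// The second camera must not clear what the first camera has rendered.
cameraB.clearColorBuffer = false;
cameraB.clearDepthBuffer = false;
```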

In terms of the render order of the layers, they render in the order they are listed in the settings, as shown above. There are a number of things to keep in mind here.

Each layer is split into an opaque and a transparent sublayer, and which one a mesh falls into usually depends on the material that is used, i.e. if it uses opacity, it will be on the transparent sublayer.
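For example, a sketch of making a standard material semi-transparent, which puts the meshes using it on the transparent sublayer:

```javascript
// Sketch: a material that uses opacity/blending ends up on the
// transparent sub-layer of its layer.
var material = new pc.StandardMaterial();
material.diffuse = new pc.Color(0.2, 0.6, 1.0);
material.opacity = 0.5;
material.blendType = pc.BLEND_NORMAL; // enable alpha blending
material.update();
```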

Meshes are rendered per sublayer in the order determined by that layer’s sorting logic, as seen in the settings.
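The sort mode can also be read or changed per layer from code; a sketch, assuming the built-in ‘World’ layer:

```javascript
// Sketch: per-layer sort modes for the opaque and transparent sub-layers.
var worldLayer = this.app.scene.layers.getLayerByName('World');

// Opaque meshes: sort to reduce state changes and overdraw.
worldLayer.opaqueSortMode = pc.SORTMODE_MATERIALMESH;

// Transparent meshes: sort back to front so blending composites correctly.
worldLayer.transparentSortMode = pc.SORTMODE_BACK2FRONT;
```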

The generally recommended practice for ordering is to have all the opaque sublayers together first and then have all the transparent sublayers.

This is because, before rendering the pixels to the render target, the renderer tests on a per-pixel basis whether it should render a mesh to that pixel by testing against the Z/depth buffer (unless depth testing is disabled on the mesh/material).

The depth buffer represents the closest distance to the camera at which something has rendered to that pixel. (Learn more about the Z/depth buffer in this crash course: 3D Graphics: Crash Course Computer Science #27 - YouTube)

With transparent meshes, you want to ‘see’ through them to show what is behind them, so the objects behind HAVE to be rendered first. If the transparent mesh is rendered first and it is closest to the camera, then any mesh behind it won’t render: when it is tested against the depth buffer on those pixels, the renderer sees that something closer has already been rendered and therefore doesn’t render that mesh to the render target for those pixels.

And you get the following effect:

Instead of the correct ordering and rendering:

This is why it’s so hard to get the rendering order correct with complex transparent meshes. It’s VERY difficult to get the order correct for every angle, especially if they interlink, so the recommendation is to break down the mesh so that it’s easier to sort and potentially ‘fix’ via layers.

The added gotcha is that a layer can clear the depth buffer, which has to be set in code. The exception to this is the UI layer, which clears the depth buffer because it renders the screen-space UI elements on top of everything that has been rendered before.
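A sketch of setting that in code, assuming a layer named ‘Overlay’ exists in the project:

```javascript
// Sketch: make a layer clear the depth buffer before it renders, so its
// meshes are not depth-tested against anything rendered by earlier layers.
var overlayLayer = this.app.scene.layers.getLayerByName('Overlay');
overlayLayer.clearDepthBuffer = true;
```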

The gotcha here is that any layer AFTER the UI sublayer will render on top of what has already been rendered.

The recommended practice is to have any layers that render in 3D space placed BEFORE the UI sublayer.

There was mention of changing/modifying clip values on the camera.

Bear in mind that the depth buffer only has a certain number of bits of precision (see how floating-point numbers are represented for precision). The bigger the range between the near and far clips, the lower the precision you get for distances from the camera, which can lead to Z-fighting (Z-fighting - Wikipedia) if two or more polys are close together in world space.

The recommended practice is to keep the range between the clip values as small as possible, while also taking into consideration the distances between polys and meshes in the scene.
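A sketch of tightening the clip range on a camera in code (the values are assumptions and depend entirely on the scene’s scale):

```javascript
// Sketch: keep the near/far clip range as tight as the scene allows
// to preserve depth buffer precision.
var camera = this.entity.camera;
camera.nearClip = 0.5; // push the near plane out as far as acceptable
camera.farClip = 200;  // pull the far plane in as close as acceptable
```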

Example of having too large a clip range:


And example of having a suitable range:



Great guide - will be useful from here on forward.