Pixel overdraw refers to the number of times the same screen pixel is written during a frame, with later draws covering earlier ones. Keeping pixel overdraw as low as possible is important for several reasons:
Performance: Overdraw can greatly slow down a game or application, because the GPU has to shade the same pixel multiple times.
Battery life: Overdraw can also have a significant impact on battery life, since shading more pixels consumes more power.
Quality: When overlapping surfaces are drawn in the wrong order (a common source of overdraw, especially with transparency), visual artifacts such as flickering or incorrect blending can appear, hurting the visual quality of the game or application.
To reduce pixel overdraw, make sure that objects in the scene are sorted appropriately (opaque geometry front to back) and that objects fully hidden behind others are not drawn at all. Techniques such as frustum culling and occlusion culling can further reduce the number of pixels that need to be shaded.
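A minimal sketch of those two ideas in PlayCanvas (TypeScript), assuming an existing `pc.Application`; the entity names `'Camera'` and `'HiddenRoom'` are hypothetical placeholders, and the hidden room is assumed to use a render component:

```ts
import * as pc from 'playcanvas';

// Keep off-screen and fully occluded geometry from being drawn at all.
export function cullHiddenGeometry(app: pc.Application): void {
    // Frustum culling is on by default, but make sure it was not disabled:
    // off-screen mesh instances are then never submitted to the GPU.
    const camera = app.root.findByName('Camera') as pc.Entity | null;
    if (camera && camera.camera) {
        camera.camera.frustumCulling = true;
    }

    // Manually hide objects you know cannot be seen right now, e.g. the
    // interior of a building the player has not entered yet.
    // 'HiddenRoom' is a hypothetical entity with a render component.
    const room = app.root.findByName('HiddenRoom') as pc.Entity | null;
    room?.render?.meshInstances.forEach((mi) => {
        mi.visible = false;
    });
}
```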
NOTE: Reducing overdraw only helps if fragment shading is your bottleneck, so always run the PlayCanvas profiler on your project before optimizing.
TODO: Consider baking text into a texture when it does not change for a long time: in the overdraw visualization, the text areas show up black, which means those pixels are overdrawn at least 9 times.
Many thanks for sharing, that is super useful for debugging performance. Unreal Engine offers an easy-to-use overdraw visual debugger that I always found so handy.
And here is a test on my vegetation-heavy, overdraw-prone scene (Solar Island). We ended up using real meshes for the grass/flowers to avoid exactly that: insane overdraw when using billboards.
So that’s a very interesting approach! Visually there is almost no change on my test scene; it works as expected, with just a slight change on the grass at a distance.
Still, I think on my scene I get the best performance with the default Material/Mesh preset on the World layer.
I will make sure to let you know if I test some more, many thanks for sharing this!
Of course. It depends on whether the cost lands on the CPU or the GPU: Material/Mesh is CPU-friendly (fewer state and material changes), while Front-To-Back is GPU-friendly (better early-z rejection). The CPU is probably the bottleneck for your scene.
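For reference, a minimal sketch (TypeScript, assuming an existing `pc.Application`) of switching the World layer between those two presets so you can compare them per project:

```ts
import * as pc from 'playcanvas';

export function setOpaqueSortMode(app: pc.Application, gpuBound: boolean): void {
    const world = app.scene.layers.getLayerByName('World');
    if (!world) return;

    // Material/Mesh groups draws to minimise state changes and CPU work;
    // Front-To-Back maximises early-z rejection on the GPU at the cost of
    // extra per-frame sorting on the CPU.
    world.opaqueSortMode = gpuBound
        ? pc.SORTMODE_FRONT2BACK
        : pc.SORTMODE_MATERIALMESH;
}
```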
Hm. Could we use occlusion queries (WebGL2) from previous frames as a sorting strategy? The sorting could consider the fragment count for each MeshInstance, with some randomness added to account for variations due to draw order. The same heuristic could also improve a depth pre-pass for occluding objects. I’m not sure it works.
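A rough sketch of that idea against the raw WebGL2 query API (TypeScript). One caveat: WebGL2 only exposes ANY_SAMPLES_PASSED, which returns a boolean rather than a fragment count, so this version can only bucket draws into "visible last frame" and "occluded last frame". The `QueriedDraw` shape and the jittered sort key are hypothetical illustrations, not anything the engine provides.

```ts
// Each draw keeps its own query object and the result read back from a
// previous frame.
interface QueriedDraw {
    query: WebGLQuery;
    visibleLastFrame: boolean;
    draw: () => void;           // issues the actual draw call
    distanceToCamera: number;
}

function drawWithQueries(gl: WebGL2RenderingContext, draws: QueriedDraw[]): void {
    // Read back last frame's results only if they are ready, never stalling
    // the pipeline waiting for them.
    for (const d of draws) {
        if (gl.getQueryParameter(d.query, gl.QUERY_RESULT_AVAILABLE)) {
            d.visibleLastFrame = !!gl.getQueryParameter(d.query, gl.QUERY_RESULT);
        }
    }

    // Heuristic sort key: draws that were visible last frame go first,
    // front to back, with a small random jitter so ties do not always
    // resolve in the same order.
    const key = new Map<QueriedDraw, number>();
    for (const d of draws) {
        key.set(d, (d.visibleLastFrame ? 0 : 1e6)
            + d.distanceToCamera
            + Math.random() * 0.001);
    }
    draws.sort((a, b) => key.get(a)! - key.get(b)!);

    // Issue this frame's draws, wrapping each one in a fresh occlusion query
    // whose result will be consumed in a later frame.
    for (const d of draws) {
        gl.beginQuery(gl.ANY_SAMPLES_PASSED, d.query);
        d.draw();
        gl.endQuery(gl.ANY_SAMPLES_PASSED);
    }
}
```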