Optimising texture usage

We are currently developing a project that has many small discrete models, each with its own diffuse, light map and combined specular/metallic maps.

We are considering combining the light maps into a single large texture, mainly to limit the number of concurrent network requests, but is there any rendering efficiency gained from doing this? i.e. will PlayCanvas minimise the number of GL state changes between draw calls if textures are shared between models? Are there any additional pros/cons of doing this? We’re just evaluating whether it’s worth the additional work.

Thanks!

Hi @Mark_Lundin,

You will definitely get lower memory usage, which is quite important on mobile. Shader texture lookups will, I think, cost the same, since it doesn’t matter whether it’s a single texture being sampled or multiple.

The number of draw calls can potentially be reduced if your models happen to share the same material, since they can then be batched by PlayCanvas to drastically reduce the draw calls. Check this thread; it contains some useful info on the subject:


So personally, I would combine lots of lightmaps into one big one, yes. This is going to mean you have fewer materials, so fewer draw calls and state changes. This is of much higher importance than memory (it’s unlikely you’ll run out of memory due to vertex count, whereas excessive draw calls can kill performance on the CPU).

Hey @mvaligursky - do you have any thoughts on this (including batching insights)?

This is what I was thinking, but good to have your insight. However, my understanding was that two materials compile to the same shader if both use the same set of parameters, regardless of whether those parameters are equal. So if two materials used only diffuse and light map textures, they would compile to the same shader regardless of whether the actual textures were identical.
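To illustrate, a rough sketch of what I mean (assuming pc.StandardMaterial, with hypothetical already-loaded pc.Texture instances):

```javascript
// Two StandardMaterials enabling the same set of map slots. As I understand
// it, these should compile to the same shader variant, because the generated
// shader depends on which features are enabled, not on which texture
// instances are bound. (diffuseA/B and lightmapA/B are hypothetical,
// already-loaded pc.Texture objects.)
const matA = new pc.StandardMaterial();
matA.diffuseMap = diffuseA;
matA.lightMap = lightmapA;
matA.lightMapUv = 1; // lightmap UVs live in the second UV set
matA.update();

const matB = new pc.StandardMaterial();
matB.diffuseMap = diffuseB; // different texture, same feature set
matB.lightMap = lightmapB;
matB.lightMapUv = 1;
matB.update();
```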

Further to this though, does the renderer sort draw calls of a given material/shader by whether they share the same texture? So if 5 models share the same material (compile to the same shader) and also share the same light map, do they get grouped together so that each draw call doesn’t need to bind and unbind the same identical texture?

You should definitely combine lightmaps into a large atlas. It might help you batch some meshes that use the same materials. There are also the small wins you mentioned, where the engine won’t have to bind the identical texture, and also the shader, as that’d be the same. The engine also sorts by many of the mesh/material properties (a sorting key is generated based on these), so you get extra efficiency from this.
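For reference, enabling batching from a script looks roughly like this (a sketch - the group name, max AABB size and entity list are made up):

```javascript
// Create a static batch group: addGroup(name, dynamic, maxAabbSize).
const group = app.batcher.addGroup('atlasProps', false, 100);

// Point every model that shares the atlas material at the group; the engine
// regenerates dirty groups, merging their meshes into far fewer draw calls.
for (const entity of entitiesSharingAtlas) { // hypothetical array of entities
    entity.model.batchGroupId = group.id;
}
```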

But to get the best performance, I would consider putting your other textures into larger atlases too. Consider this scenario: you have 100 objects, each with a unique, smaller diffuse texture, and they share the lightmap. This will end up being 100 draw calls (per camera). You could create diffuse texture atlases; say an atlas fits 5x5 textures, so 25 in total. If you enable batching in the engine, it could combine all meshes sharing the diffuse and lightmap atlases, and you would end up with 4 draw calls (this assumes they all use the same material, so other properties need to match too).

The disadvantage here is that if you only need 1 object from a diffuse atlas, you still have to load the whole atlas with 25 textures in it. If that is a likely scenario, you could give up using compression on that diffuse atlas and build the atlas at runtime from the small textures as you load them.
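A minimal sketch of that runtime-atlas idea (the cell size, layout and `images` array are assumptions, and each mesh’s UVs would still need remapping into its atlas cell):

```javascript
// Pack 25 loaded 256x256 images (e.g. HTMLImageElement) into a 5x5 atlas.
const cell = 256, cols = 5;
const canvas = document.createElement('canvas');
canvas.width = canvas.height = cell * cols; // 1280x1280
const ctx = canvas.getContext('2d');

images.forEach((img, i) => { // `images` is a hypothetical array of 25 images
    ctx.drawImage(img, (i % cols) * cell, Math.floor(i / cols) * cell, cell, cell);
});

// Upload the canvas as an uncompressed RGBA texture - this is where you give
// up GPU texture compression in exchange for building the atlas on the fly.
const atlas = new pc.Texture(app.graphicsDevice, {
    width: canvas.width,
    height: canvas.height,
    format: pc.PIXELFORMAT_R8_G8_B8_A8
});
atlas.setSource(canvas);
```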


That’s great @mvaligursky, good to know. I was unsure how marginal the performance gains from switching textures were, but it makes sense that if we’re already combining lightmaps, then combining the other textures will give the most gains by reducing draw calls. There’s obviously a bit of a trade-off in the artist pipeline, so we’d need to look into this a bit more.

Just following up on this, @mvaligursky. Do you know if the batching process is non-blocking, i.e. does it run on a worker?

The batching is on the main thread, but it’s not expensive - it just transforms the vertices of the batched meshes into world space, stores them in a new vertex buffer, and copies the indices as well. This is pretty fast in general.
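If you want a number for your own scene, you can also time a forced regeneration yourself - a quick sketch, assuming your batch groups are already set up:

```javascript
// Force a (re)generation of all batch groups and time it on the main thread.
const t0 = performance.now();
app.batcher.generate(); // optionally pass an array of batch group ids
console.log('batch generation took ' + (performance.now() - t0).toFixed(2) + ' ms');
```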


You can easily measure the time it takes to batch something by watching the timer available in the profiler:

(screenshot: the batch generation timer in the profiler)


I did see that @Leonidas. It’s in ms, right? I hope it is :laughing:


Heh, yes, they are ms!

Phew, yeah that makes things a lot better then. We were having a big hang on load and I thought it was the batching, but 5ms is a lot better :sweat_smile: It’s probably the shaders compiling. Speaking of which, I noticed in the Seemore project you have a script that pre-compiles all the shaders. I’m not really sure how that works in practice - is it just bypassing the process of determining what shaders are needed? Will it speed things up if we get a reference to the latest shaders?

Good question - @will may be able to offer insight on how that is used in the Seemore demo.

Shader compilation is a thread-blocking process, and yes, if you have a lot of unique shaders in your project it can take a while to compile them all on startup. You can check in the profiler how many unique materials/shaders you have:

You can’t do much about it, to be honest, at least until the async shader compilation WebGL extension becomes widely used and supported. The best advice is to group your materials to share the same shader where possible:

https://developer.playcanvas.com/en/user-manual/optimization/guidelines/

If you profile your application startup, you should be able to see whether it’s the shader compilation or not. If it is, there’s an undocumented function you can use. Basically, run the project through your typical scene, so that the engine internally builds all the shader variants that are used. Then you dump the list of variants into a .js file and include it in your project, and on the next run all of them should be precompiled. Profile and see if that makes a difference for you.
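For reference, the workflow looks roughly like this (the function is undocumented and version-dependent, so treat the exact name and behaviour as assumptions and check the engine source for your version):

```javascript
// 1. Play through a typical scene so the program library records every shader
//    variant actually used, then run this from the browser console
//    (undocumented API - verify it exists in your engine version):
pc.Application.getApplication().graphicsDevice.programLib.dumpPrograms();

// 2. This saves a .js file containing the recorded variants and a call to
//    programLib.precompile(...). Include that file in your project, set to
//    load before your own scripts, and the variants are compiled up front
//    on the next startup.
```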
