I have a general question about which path to take regarding optimization. We're building a 3D world not unlike this one: www.simsmining.eu (also made in PlayCanvas). But this time it's larger and will have close to 50 unique little mining machines of sorts, together with many more details.
I wonder if you have any performance or optimization tips right off the bat, and I'm also specifically wondering about the strategy to take when colorizing these vehicles. Thinking out loud, I see two paths to take and wonder if you'd recommend either of them, or a third one, based on performance wins/problems?
They all share pretty much three colors, so one way would be to divide their meshes into three groups with three materials assigned to them: yellow, grey and black. Would that create 3 draw calls per machine? The same yellow material could then be assigned to all the yellow parts of all machines, etc.
Or, they could each have one unique material per machine, which in turn has a diffuse texture where these colors are defined. One draw call per machine, plus the additional overhead of a texture?
I also saw this technique on a model I bought off CGTrader.com: all models share one big rainbow texture (2048px), and to change the color of some detail you drag those UV faces to the right coordinates. Wicked. Smart? I have no clue. You'd still need non-overlapping UVs to do any lightmapping work.
All coloring/texturing strategies are valid, and the resulting performance can be improved, if required, using a number of techniques (grouping/instancing/batching). To that end I'd say first check which art creation pipeline gives you the freedom to reach your end result in a way that you can easily maintain in the future.
Regarding the strategies you propose, some comments (though I have to say the differences are usually minimal; in the end it depends on the number of objects and the size of the assets when selecting one over the other):
1. Shared materials: this is a valid strategy, with a very small memory allocation footprint (important for mobile). Potentially it can lead to more draw calls, since you will have a mesh instance per color, but using batching (easily available in PlayCanvas) you can mitigate that.
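To give a feel for the numbers: a quick back-of-envelope sketch of how batching changes the draw call count for the shared-materials approach. The figures (50 machines, 3 materials) come from the question; the function name is just illustrative, not PlayCanvas API.

```javascript
// Without batching, each machine contributes one mesh instance (one draw
// call) per material it uses; with batching, mesh instances that share a
// material can collapse into one draw call per material scene-wide.
function estimateDrawCalls(machineCount, materialsPerMachine, batched) {
  return batched
    ? materialsPerMachine                 // one call per shared material
    : machineCount * materialsPerMachine; // one call per material per machine
}

console.log(estimateDrawCalls(50, 3, false)); // 150 draw calls unbatched
console.log(estimateDrawCalls(50, 3, true));  // 3 draw calls batched
```

In practice the batcher also has per-group size limits, so the batched number is a lower bound rather than a guarantee.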
2. Texture per vehicle: texture atlases are great for reducing the number of draw calls, though having one for each vehicle/prop can potentially lead to large memory allocations, especially if all or most of your content is always visible on screen.
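To make the memory concern concrete, here is a rough estimate of the VRAM cost of one uncompressed 2048px atlas per vehicle. The 4/3 factor approximates a full mipmap chain; the 50-vehicle count comes from the question, everything else is a back-of-envelope assumption.

```javascript
// Rough VRAM estimate for an uncompressed RGBA8 texture
// (4 bytes per pixel; the 4/3 factor covers the mipmap chain).
function textureBytes(width, height, withMipmaps) {
  const base = width * height * 4;
  return withMipmaps ? Math.round(base * 4 / 3) : base;
}

// One 2048x2048 atlas per vehicle, 50 vehicles:
const perAtlas = textureBytes(2048, 2048, true); // ~21 MB each
console.log((perAtlas * 50 / (1024 * 1024)).toFixed(0) + " MB total");
```

Roughly a gigabyte of texture memory for 50 uncompressed atlases, which is exactly why this route needs compression or aggressive unloading on mobile.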
3. Shared rainbow texture: this is a great strategy when using simpler colors (e.g. low-poly games) with simple or no gradients. It provides the best of both worlds: few draw calls and a small memory footprint. It can be hard for the artist to maintain, but in the end that depends on the content.
And yes, batching supports atlasing / keeping your UV mapping intact.
From my point of view, 1 and 3 are the best if you combine them with dynamic batching in PlayCanvas. In case 1 you will have 3 render calls; in case 3 you will have 1 render call. That makes almost no difference. Option 3 would in general be slightly faster, but possibly more painful for artists to work with.
Option 2 is the most flexible, as artists can paint textures with more detail than just single colors (e.g. bake in ambient occlusion), but you would end up with a single draw call per vehicle type.
A variation of Option 2 would work well: you could create a mega texture that contains textures for many or all vehicles, let's say a 4x4 grid, so 16 vehicles. Then PlayCanvas would batch all of these 16 types together. This is the option I'd most likely pick: the best possible quality and great performance.
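The mega-texture idea boils down to remapping each vehicle's UVs into one cell of the grid. A minimal sketch of that remapping, assuming a square grid and normalized UVs; the function name and `slot` parameter are illustrative, not PlayCanvas API:

```javascript
// Remap a UV coordinate (0..1 within a vehicle's own texture) into the
// vehicle's cell of a gridSize x gridSize mega texture.
// `slot` is the vehicle's index, e.g. 0-15 for a 4x4 grid.
function remapUV(u, v, slot, gridSize) {
  const cell = 1 / gridSize;
  const col = slot % gridSize;
  const row = Math.floor(slot / gridSize);
  return [col * cell + u * cell, row * cell + v * cell];
}

console.log(remapUV(0.5, 0.5, 5, 4)); // vehicle 5 -> center of cell (col 1, row 1)
```

In practice you'd bake this offset into the mesh's UVs (or a material tiling/offset) at export time rather than per-vertex at runtime.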
I had a very quick look at that other project you linked (you can inspect a lot with https://spector.babylonjs.com/: install it as a browser plugin and capture a frame). It seems their vehicles are skinned (maybe not all), so those will not get batched at the moment and will each be a separate render call.
But in theory they could be done without skinning, as they don't "bend"; they're just separate parts connected to separate node transforms. That implementation would batch.
Yes, there is definitely a threshold that you will have to find depending on your target devices.
For polygons, when not limited by draw calls or pixel shaders, you can easily run even millions of polygons on a high-end desktop GPU. But on mobile and integrated GPUs you can see the frame rate drop after 200-300K polygons (and even that can be a lot on some devices).
Memory allocation is a bit harder to track down, and finding the exact limit is harder too. On desktop you will rarely hit a limit since, even with integrated GPUs, the OS allocates a lot of memory as VRAM. And it doesn't have a performance hit on frame rate: as long as you have memory available to allocate, your app will run. When you run out of memory, your app will crash.
On mobile, though, memory is usually limited, especially on older devices, and the browser tab running your app is competing for memory with every other app open. The OS can decide that your app is consuming too much memory and kill the tab.
On the iPhone 5/6, which feature only 1GB of RAM, this can happen very often with WebGL sites that use a lot of images. That 1GB of RAM has to be split between all the apps the user has open at the time and your 3D site. That means you may have to go as low as 50-100MB of total allocated memory if you are planning on supporting those devices.
The iPhone 6s has 2GB of RAM, and on newer devices it gets even easier (using iPhone as the example since it's easier to name models).
Your best friends here are:
- Texture compression, to lower your memory footprint. PlayCanvas makes it super easy to enable this.
- Resource management: unload any textures/meshes that aren't visible. If you have a resource-heavy app, this is a must to implement. By default PlayCanvas will automatically load everything that is preloaded or rendered at least once into VRAM, but it's up to you to implement a strategy for unloading that content and reloading it back in place.
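To show why compression matters so much for the memory budget above, a quick comparison of uncompressed RGBA8 against a 4 bits-per-pixel GPU format (PVRTC 4bpp on iOS is one such format). Back-of-envelope numbers only:

```javascript
// VRAM footprint of a texture at a given bits-per-pixel rate
// (32 bpp = uncompressed RGBA8, 4 bpp = e.g. PVRTC 4bpp on iOS).
function vramBytes(width, height, bitsPerPixel) {
  return (width * height * bitsPerPixel) / 8;
}

const uncompressed = vramBytes(2048, 2048, 32); // 16 MB
const compressed = vramBytes(2048, 2048, 4);    // 2 MB
console.log(uncompressed / compressed + "x smaller in VRAM");
```

An 8x reduction per texture is often the difference between fitting in a 50-100MB budget on an older iPhone and getting the tab killed.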
That's normal: GPU-compressed textures usually occupy more disk space (though they compress quite nicely with gzip for faster downloads). But as soon as they get uploaded to VRAM, that's where the magic kicks in.
For your iPhone issue, I think the compressed PVR textures haven't been generated yet. Even though you have that option selected, the size in the right column is missing (you see only a dash). That means a valid PVR (the format used on iOS) hasn't been generated.
One significant limitation of using the PVR format on iOS is that textures must be square, and each dimension must be a power of two.
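Since the square, power-of-two constraint is easy to trip over when preparing assets, here's a small validity check you could run over your texture dimensions before exporting (the function name is illustrative):

```javascript
// PVRTC on iOS requires square textures with power-of-two dimensions.
function isValidPvrSize(width, height) {
  const isPowerOfTwo = n => n > 0 && (n & (n - 1)) === 0;
  return width === height && isPowerOfTwo(width);
}

console.log(isValidPvrSize(2048, 2048)); // true
console.log(isValidPvrSize(2048, 1024)); // false (not square)
console.log(isValidPvrSize(1000, 1000)); // false (not power of two)
```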