My app loads large textures (4096x2048 pixels) at runtime from a server. I’ve run into a problem on mobile iOS where I run out of VRAM and the page crashes and reloads while loading multiple textures. Unloading unused textures and reloading them on demand helped somewhat, but there are still situations where it crashes.
Currently the files are JPEGs, which to my understanding is the problem, and I would need some other format that doesn’t take up as much VRAM. The format also needs to be supported on both desktop and mobile. Am I correct in assuming Basis Universal is the way to go? Is it enough to download a .basis file at runtime, create an asset for it, and assign it as a texture on a material, or is there something else that needs to be done?
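For context on why JPEG is the problem: the GPU can’t sample JPEG directly, so the image is decoded to uncompressed pixels in VRAM, and the file size on disk is irrelevant. A rough back-of-the-envelope comparison for one 4096x2048 texture (no mipmaps; sizes are approximate, and the bits-per-pixel figures are the standard ones for each format):

```python
# Rough VRAM estimate for a single 4096x2048 texture, no mipmaps.
# A JPEG is decoded to uncompressed RGBA8 on the GPU, so its disk
# size does not matter; only the in-VRAM format does.

def vram_bytes(width, height, bits_per_pixel):
    """Approximate VRAM usage for a single mip level."""
    return width * height * bits_per_pixel // 8

w, h = 4096, 2048
rgba8 = vram_bytes(w, h, 32)  # uncompressed RGBA8: 32 bits/pixel
etc1  = vram_bytes(w, h, 4)   # ETC1 / PVRTC 4bpp:   4 bits/pixel
astc4 = vram_bytes(w, h, 8)   # ASTC 4x4 blocks:     8 bits/pixel

print(rgba8 // (1024 * 1024))  # 32 (MB)
print(etc1 // (1024 * 1024))   # 4  (MB)
print(astc4 // (1024 * 1024))  # 8  (MB)
```

So a compressed GPU format is roughly a 4-8x VRAM saving per texture, which is the whole point of going through Basis.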
I’ve downloaded and built the command-line compression tool from https://github.com/BinomialLLC/basis_universal. Files compressed with it seem to work, but I’m not sure I’m getting the VRAM size benefit, as I’ve only tested with single files so far. I’m thinking of writing a script that uses the compression tool to automatically compress JPG files to .basis on upload. Also, after a few tests I’m still having trouble finding good settings for file size vs. quality; the images always come out very grainy, with visible banding even at quality level 255.
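A minimal sketch of that upload hook, shelling out to the basisu CLI. This assumes the basisu binary is on PATH and uses the flag names from the basis_universal README (`-q` for ETC1S quality, `-uastc`, `-mipmap`, `-output_file`); double-check them against the build you compiled, since options have shifted between versions:

```python
# Sketch of a server-side hook that compresses an uploaded image to
# .basis by calling the basisu CLI. Assumes basisu is on PATH; flag
# names should be verified against your build of basis_universal.
import subprocess
from pathlib import Path

def build_basisu_cmd(src, quality=255, uastc=False, mipmaps=True):
    """Build the basisu command line for one uploaded image."""
    out = Path(src).with_suffix(".basis")
    cmd = ["basisu", str(src), "-output_file", str(out)]
    if uastc:
        cmd.append("-uastc")         # higher quality, larger files
    else:
        cmd += ["-q", str(quality)]  # ETC1S quality, 1..255
    if mipmaps:
        cmd.append("-mipmap")
    return cmd

def compress_on_upload(src):
    """Run the compression; raises CalledProcessError on failure."""
    subprocess.run(build_basisu_cmd(src), check=True)

print(build_basisu_cmd("pano.jpg"))
```

Running this per upload keeps the pipeline automatic, which matches what you described: the .basis file lands next to the original and is ready to serve immediately.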
I’d just like to know if I should proceed further with this or if there’s a better/correct way to achieve what I’m trying to do.
One thing to note when targeting iOS is that your textures will need to be square. Basis is a universal intermediate format that gets transcoded to the appropriate compressed format for each device, but currently the compression format that iOS devices use only supports textures that are both power-of-two and square in size.
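That constraint is easy to check up front before deciding whether a texture can take the compressed path. A small helper (the power-of-two test uses the usual bit trick):

```python
def is_pot(n):
    """True if n is a power of two (1, 2, 4, 8, ...)."""
    return n > 0 and (n & (n - 1)) == 0

def pvr_compatible(width, height):
    """PVR compression needs textures that are both square and POT."""
    return width == height and is_pot(width)

print(pvr_compatible(2048, 2048))  # True
print(pvr_compatible(4096, 2048))  # False: fails the square requirement
```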
“The PVR format only supports textures that have dimensions that are both square (same width and height) and power of two (e.g. 256, 512, 1024 and so on). Older iOS devices (with an A6 SoC or lower like the iPhone 5 and 5C) and older iOS versions (13.7 and lower) only support PVR.”
That gave me the impression this is only a problem for lower-end devices? What happens if the texture isn’t square? I’ve tested on a 2016 iPhone SE with the latest iOS and there it at least loads the texture. But does it get any VRAM size benefit or does it fall back to jpg?
My understanding is that if the compression is not supported, it falls back to the default image format, jpg in this case. However, I could be wrong, so others may wish to comment. You could try testing with a square image and see if you get VRAM usage improvements.
Unfortunately the textures are 360 images mapped on a sphere so I don’t think they can be square. I’m working on a 360 image viewer app.
Also, I’m still getting just horrendous quality out of the Basis compression tool, even at the highest quality settings, with both ETC1S and UASTC; max quality settings seem only marginally better than the ETC1S defaults. How does PlayCanvas do Basis compression to retain image quality? Is something required of the image for the compression to work properly? Or is Basis just this bad with gradients?
Edit: I tried compressing an image with the PlayCanvas Editor and downloaded the result. It’s the same as with the command-line tool: terribly grainy, with banded gradients. Apparently Basis is just no good for large photos / photorealistic 3D renders.
I suspect the only compressed path that will give you usable results is if you compress to basis ASTC and run on a device that also supports ASTC. In this case there is no runtime transcoding and quality shouldn’t suffer.
Unfortunately if the device doesn’t support ASTC, then the image will get transcoded and undergo quality degradation. In that case, you could transcode the images to uncompressed instead of the device supported format, but then of course you have the VRAM issues you started with.
The logic for non-square Basis textures is implemented here:
- if we’re targeting PVR compression at runtime and the texture isn’t POT and/or square, revert to uncompressed
- if we’re running on a WebGL1 device and the texture isn’t POT, revert to uncompressed
In the case where there is no alpha, we choose the RGB565 format for uncompressed textures. This matches the range of bits stored in ETC1, but for a Basis texture compressed to ASTC the conversion would result in quality degradation. We might want to make this user-controllable in future.
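For reference, RGB565 packs each pixel into 16 bits (5 red, 6 green, 5 blue), which is why it matches ETC1’s color range but loses precision relative to an ASTC source. A sketch of the packing and unpacking:

```python
def pack_rgb565(r, g, b):
    """Pack 8-bit RGB into a 16-bit RGB565 value (low bits truncated)."""
    return ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3)

def unpack_rgb565(v):
    """Expand back to 8 bits per channel, replicating the high bits."""
    r = (v >> 11) & 0x1F
    g = (v >> 5) & 0x3F
    b = v & 0x1F
    return ((r << 3) | (r >> 2), (g << 2) | (g >> 4), (b << 3) | (b >> 2))

print(hex(pack_rgb565(255, 0, 0)))  # 0xf800
print(unpack_rgb565(0xF800))        # (255, 0, 0)
```

The truncation of the low bits in `pack_rgb565` is exactly where the extra banding in smooth gradients comes from on this fallback path.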
Thanks for the pointers. For now I’ve opted to use half-resolution (2048x1024) images on mobile, and that, along with the on-demand loading/unloading, seems to fix the VRAM problem, or at least it’s a workaround. 4K images on small mobile screens feel a bit overkill anyway. I may look into Basis again later, but it doesn’t look too feasible for my use case.
So I’d need to split the 2:1 aspect ratio 360 images and the sphere model in half, then apply the image halves to the sphere halves and hope it doesn’t create a visible seam or some other artifact.
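If you do try the split, the crop math is at least simple for a 2:1 equirectangular image: the left and right halves are each square, and each covers one hemisphere of longitude. A sketch of the crop boxes, using (left, top, right, bottom) pixel coordinates as, for example, Pillow’s `Image.crop` expects:

```python
def split_boxes(width, height):
    """Crop boxes for splitting a 2:1 equirectangular image into two
    square halves, as (left, top, right, bottom) tuples per half."""
    assert width == 2 * height, "expected a 2:1 panorama"
    half = width // 2
    return [(0, 0, half, height), (half, 0, width, height)]

print(split_boxes(4096, 2048))
# [(0, 0, 2048, 2048), (2048, 0, 4096, 2048)]
```

Each 2048x2048 half is then square and power-of-two, so it would satisfy the PVR constraint discussed above; whether the seam between the two sphere halves stays invisible is the part that needs testing.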
As the banding comes from the Basis compression, I guess it’s not possible to edit the images after that? And it wouldn’t be that useful anyway: I’d like the compression to happen when any supported image is uploaded to the server, with the image available to the app right after that, without any additional steps. Just out of curiosity I tried compressing a 48-bit PNG, but the banding problem persists. Adding noise to an image is also an additional step and wouldn’t be that useful either, as it would just trade one quality problem for another.