I spoke to the GFX team and was told that not all devices support 16-bit well, and therefore you would need to check support and convert down where necessary.
Another possibility is to use the texture format PIXELFORMAT_111110F, which is WebGL2-only, but it means you can upload a standard 8-bit PNG and just change the texture format.
(This is the 16-bit normal map in Marmoset Toolbag.) You won't notice any glitches except the few errors I pointed out, and those are my fault.
Yes, but in Marmoset Toolbag the 16-bit map shows no banding.
It's the same now because after importing/uploading the 16-bit normal map to PlayCanvas, it was automatically downgraded to 8-bit by PlayCanvas. That is the problem, and that's why both look the same. 16-bit should be without banding.
You need to use a 16-bit PNG, convert its pixels to the 11-11-10 format when loaded, and upload that data to the texture. Using a standard 8-bit PNG won't give you better precision.
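If you do go the 11-11-10 route, the conversion itself is just bit manipulation. A minimal sketch (helper names are mine, not a PlayCanvas API), assuming the R11F_G11F_B10F layout with red in the low bits, as WebGL2's `UNSIGNED_INT_10F_11F_11F_REV` packing defines:

```javascript
// Scratch buffers for reinterpreting float bits.
const f32 = new Float32Array(1);
const u32 = new Uint32Array(f32.buffer);

// Pack a non-negative float32 into an unsigned small float:
// 5 exponent bits plus `mantissaBits` mantissa bits (6 for R11/G11, 5 for B10).
function toSmallFloat(f, mantissaBits) {
    f32[0] = f;
    const bits = u32[0];
    if (f <= 0) return 0;                       // format is unsigned; clamp negatives
    const e = ((bits >> 23) & 0xff) - 127 + 15; // rebias exponent: float32 (127) -> 5-bit (15)
    if (e <= 0) return 0;                       // underflow: flush to zero
    if (e >= 31) return (30 << mantissaBits) | ((1 << mantissaBits) - 1); // clamp to max finite
    return (e << mantissaBits) | ((bits & 0x7fffff) >> (23 - mantissaBits));
}

// Pack r, g, b floats (e.g. 16-bit PNG samples divided by 65535) into one
// 32-bit word: B in the top 10 bits, then G (11 bits), then R (11 bits).
function pack111110(r, g, b) {
    return ((toSmallFloat(b, 5) << 22) | (toSmallFloat(g, 6) << 11) | toSmallFloat(r, 6)) >>> 0;
}
```

Note that 11/10-bit floats have fewer mantissa bits than the 16-bit source, so this is still a precision reduction, just a much smaller one than 8-bit.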
You could likely do the 8888 to 11-11-10 conversion ahead of time and store the result in an 8888 PNG, but I'm not sure this would be without complications.
I doubt there are any tools to help with the GL_R11F_G11F_B10F format (google it; that is the OpenGL / WebGL format name).
Typically it's used as a format for render targets, not for textures. I can't even guarantee that all platforms will allow you to upload data to it at all.
Unless you are prepared to write a bunch of code to test it, I'd just go with the HalfFloat / Float format, which I suggested to @yaustar here:
In this case you'd need a PNG loader library that can load 16-bit PNGs (maybe the browser can even do it? No idea). Then, similarly to what the Area Lights LUT code does, you'd convert those 16-bit values to either 4xHalfFloat, 4xFloat, or the 8888 format, depending on what the platform supports, and upload that to a texture.
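The float-to-half conversion part of that is short. The texture-creation part below follows the general shape of the PlayCanvas `pc.Texture` API, but treat it as a hedged sketch (`createHalfFloatTexture` is my own helper name, and real code would branch on device capabilities as the Area Lights LUT code does):

```javascript
// Scratch buffers for reinterpreting float bits.
const fbuf = new Float32Array(1);
const ibuf = new Uint32Array(fbuf.buffer);

// Convert a float32 to IEEE half-float bits (truncating round, denormals flushed to zero).
function floatToHalf(f) {
    fbuf[0] = f;
    const x = ibuf[0];
    const sign = (x >> 16) & 0x8000;
    const e = ((x >> 23) & 0xff) - 127 + 15;  // rebias exponent to half-float (bias 15)
    if (e <= 0) return sign;                  // underflow: flush to signed zero
    if (e >= 31) return sign | 0x7c00;        // overflow: infinity
    return sign | (e << 10) | ((x & 0x7fffff) >> 13);
}

// Sketch: upload 16-bit PNG samples (values16: a Uint16Array of RGBA values)
// as a half-float texture on a pc.GraphicsDevice.
function createHalfFloatTexture(device, values16, width, height) {
    const pixels = new Uint16Array(values16.length);
    for (let i = 0; i < values16.length; i++) {
        pixels[i] = floatToHalf(values16[i] / 65535); // normalize to [0, 1], then to half bits
    }
    const tex = new pc.Texture(device, {
        width: width,
        height: height,
        format: pc.PIXELFORMAT_RGBA16F,
        mipmaps: false
    });
    tex.lock().set(pixels);
    tex.unlock();
    return tex;
}
```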
It would split the 16-bit data into a lower 8 bits and a higher 8 bits, so I would use both of them.
Could you provide me with some info on how to blend two 8-bit normal maps in PlayCanvas?
I'm not sure splitting would be a good option, as you would not be able to use bilinear texture interpolation to sample those textures in the shader. You'd need to do manual interpolation, or you'd get strange errors / noise in the normal map. I don't have code for any of this handy, I'm sorry; it's not just a few lines or anything easy like that.
1. Use that tool to split the 16-bit PNG into two 8-bit PNGs.
2. Load those PNGs as normal texture assets in PlayCanvas.
3. When loaded, get their pixels and combine them into a float array (so your data would basically be the same as the original 16-bit PNG).
4. Use the already referenced Area Lights LUT code to load this float data into a float / half-float / 8-bit texture, depending on what is available on the device. On the majority of devices you will get 16 or 32 bits per channel.
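Step 3 above (recombining the two 8-bit textures into 16-bit values) is the simple part. A minimal sketch, assuming both PNGs decoded to same-sized RGBA `Uint8Array` data:

```javascript
// hi and lo: RGBA pixel data (Uint8Array) of the high-byte and low-byte PNGs.
// Returns normalized floats, i.e. the same values the original 16-bit PNG held.
function combineHiLo(hi, lo) {
    const out = new Float32Array(hi.length);
    for (let i = 0; i < hi.length; i++) {
        out[i] = ((hi[i] << 8) | lo[i]) / 65535; // reassemble 16-bit value, normalize to [0, 1]
    }
    return out;
}
```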
For normal maps specifically, you could also write a splitter that turns the 16-bit normal map into a single RGBA TGA texture, for example. A normal map only needs two channels, R and B; the green can be computed. So you just need to store the 16-bit R as RG in 8 bits, and the 16-bit B as BA. You'd then have a single 8-bit texture and could continue from step 3 of my previous post (combine the channels into a float array).