Yes, it’s a bit confusing; I’ll attempt to explain.
Maya, Blender and OpenGL consider the UV origin (0,0) to be the bottom left of the texture. (This also means the first byte in GPU texture memory is the bottom-left pixel of the image.)
However, WebGL and glTF instead specify the UV origin (0,0) to be the top left (see, for example, the glTF spec and the WebGL spec: “The first pixel transferred from the source to the WebGL implementation corresponds to the upper left corner of the source”).
This means some conversion is necessary between Maya/Blender/OpenGL and WebGL/glTF.
There are two options to accomplish this:
- keep model texture coordinates as they are and flip all textures vertically
- transform model texture coordinates using `v = 1.0 - v` and keep textures as they are
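The second option can be sketched as a simple pass over the UV data. This is just an illustrative snippet (the function name and the interleaved `[u, v, u, v, ...]` layout are assumptions, not engine code):

```javascript
// Flip the V component of an interleaved [u0, v0, u1, v1, ...] UV array so
// that coordinates authored with a bottom-left origin (Maya/Blender/OpenGL)
// match the top-left origin used by glTF/WebGL.
function flipV(uvs) {
    const out = uvs.slice();
    for (let i = 1; i < out.length; i += 2) {
        out[i] = 1.0 - out[i]; // v = 1.0 - v
    }
    return out;
}

// e.g. a quad's bottom-left corner (0, 0) becomes (0, 1):
flipV([0, 0, 1, 0, 1, 1, 0, 1]); // → [0, 1, 1, 1, 1, 0, 0, 0]
```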
The issue with flipping textures at load time is that it isn’t possible for compressed textures; those must be flipped at compression time instead.
Therefore the standard approach taken by the industry is to keep textures naturally oriented and invert texture coordinates instead. All glTF converters and viewers expect this.
In v1.45.0 of the engine we made a number of changes in this area: we now store texture coordinates in native glTF form and store textures naturally oriented. (Previously the engine was flipping both glTF texture coordinates and images at load time.)
The engine change means we can now more easily handle things like vertex quantisation and texture coordinate transforms correctly, and it will simplify the pipeline going forward. However, as you’ve seen, it can impact things like custom chunks.
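To illustrate why quantisation makes a load-time flip awkward: with KHR_mesh_quantization, UVs can be stored as normalised integers rather than floats, so the flip has to be done in integer space. A minimal sketch, assuming normalised uint16 UVs (the function name is hypothetical):

```javascript
// With normalized uint16 UVs, the float value is q / 65535, so flipping V
// at load time means q' = 65535 - q rather than 1.0 - q. Keeping coordinates
// in native glTF form avoids having to special-case each storage format.
function flipQuantizedV(uvs /* Uint16Array, interleaved u,v */) {
    const out = Uint16Array.from(uvs);
    for (let i = 1; i < out.length; i += 2) {
        out[i] = 65535 - out[i];
    }
    return out;
}

flipQuantizedV(new Uint16Array([0, 0, 65535, 65535]));
// → Uint16Array [0, 65535, 65535, 0]
```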
I hope that makes some sense. Let us know if there’s anything else we can do to help!