As you know, tiling textures within a texture atlas can be a pretty tricky thing with mipmapping/linear filtering (seams and such…), though there are ways to handle it with shader tricks to get it to look right.
One option is to pad each texture with extra pixels within the atlas, but in this case I decided against padding in order to squeeze in as many textures as possible.
There are several shader scheme builds to test performance, each with its own minimum hardware requirements:
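For context, the wrap-within-a-tile lookup that makes unpadded atlases bleed at tile edges is essentially a `fract()` remap into the tile's sub-rectangle. A minimal CPU-side sketch of that math (the tile origin/size values in the test are hypothetical, not taken from these builds):

```python
def atlas_uv(uv, tile_origin, tile_size):
    """Wrap a tiling UV coordinate into a tile's sub-rectangle of the atlas.

    uv: (u, v) tiling coordinates, may exceed [0, 1)
    tile_origin: top-left of the tile in normalized atlas coordinates
    tile_size: size of the tile in normalized atlas coordinates
    """
    # fract(uv) keeps the coordinate inside the tile so the texture repeats.
    fu = uv[0] % 1.0
    fv = uv[1] % 1.0
    return (tile_origin[0] + fu * tile_size[0],
            tile_origin[1] + fv * tile_size[1])
```

Because linear and mipmap filtering sample neighbors of the wrapped coordinate, texels just across the tile border get averaged in, which is the seam problem that padding normally hides.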
https://playcanv.as/p/3QlSEgHF/ Main build. No hardware requirements. Should work on mobile as well. Bakes the mipmaps directly into the texture atlas itself and samples directly from within that same atlas, with no mipmap filtering from the engine itself, only linear filtering. This means each texture within the atlas takes up double the space though: one region for the regular map and another for its mipmaps. (Weird, why are there artifacts on mobile?)
https://playcanv.as/b/LtTWO0MD/ No hardware requirements. Uses a LOD bias offset hack with `texture2D` and a lot more `dFdx`/`dFdy` calls. Works on mobile as well. Runs with both mipmap filtering and linear filtering from the engine itself, so mipmaps don't need to be baked into the texture atlas itself.
https://playcanv.as/b/EwzWF0Xp/ Requires the EXT_shader_texture_lod extension (https://developer.mozilla.org/en-US/docs/Web/API/EXT_shader_texture_lod) and its `texture2DLodEXT` method to manually select the mipmap LOD level. (In WebGL 2.0 the equivalent `textureLod` is core.) Doesn't seem to work on mobile though. Runs with both mipmap filtering and linear filtering from the engine itself, so mipmaps don't need to be baked into the texture atlas itself.
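Schemes 2 and 3 both lean on the standard derivative-based LOD formula, with `dFdx`/`dFdy` feeding either a bias offset or an explicit `texture2DLodEXT` level. A CPU-side sketch of that formula (the exact bias value each build uses is not something I'm asserting here):

```python
import math

def mip_level(ddx, ddy, tex_size, bias=0.0):
    """Approximate the mip level GL would select from screen-space UV
    derivatives (the values dFdx/dFdy return), plus an optional bias.

    ddx, ddy: per-pixel UV derivatives (du, dv)
    tex_size: texture dimensions in pixels (w, h)
    """
    # Scale derivatives into texel units, then take the larger footprint.
    px = (ddx[0] * tex_size[0]) ** 2 + (ddx[1] * tex_size[1]) ** 2
    py = (ddy[0] * tex_size[0]) ** 2 + (ddy[1] * tex_size[1]) ** 2
    return max(0.0, 0.5 * math.log2(max(px, py)) + bias)
```

At 1:1 sampling of a 256px texture the UV derivative per pixel is 1/256, giving level 0; minify by 4x and the level comes out as 2.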
The current tiling texturing scheme doesn't require you to prepare specific blend combinations of textures per atlas tile. Instead, there are pre-defined RGB blend tile variations used in conjunction with plain textures, plus a tile bitmap lookup whose encoded pixels determine which RGB+blend tiles (and blend rotation transforms) to choose.
Managed to pack four 4-bit texture references and transforms into a single 24-bit pixel:
Red channel - upper 4 bits: Red sample tile reference (upper 2 bits u, lower 2 bits v); lower 4 bits: Green sample tile reference (same 2+2 layout)
Green channel - upper 4 bits: Blue sample tile reference; lower 4 bits: Blend sample tile reference
Blue channel - UV transform matrix a, b, c, d values for the Blend sample, 2 bits each from left (most significant) to right (least significant)
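Read literally, that layout packs four 4-bit tile references (2-bit u + 2-bit v each) plus the blend transform into one RGB pixel. A sketch of the pack/unpack in Python (function and parameter names are mine, not from the build):

```python
def pack_tile_pixel(red_uv, green_uv, blue_uv, blend_uv, blend_xform):
    """Pack four 4-bit tile references and a blend transform into R, G, B.

    red_uv/green_uv/blue_uv/blend_uv: (u, v) tile indices, each 0..3
    blend_xform: (a, b, c, d) UV-transform values, each 0..3
    """
    def ref(uv):  # 2-bit u in the upper half, 2-bit v in the lower half
        return (uv[0] & 3) << 2 | (uv[1] & 3)

    r = ref(red_uv) << 4 | ref(green_uv)   # upper nibble: R sample, lower: G sample
    g = ref(blue_uv) << 4 | ref(blend_uv)  # upper nibble: B sample, lower: blend
    a, b, c, d = blend_xform               # 2 bits each, most significant first
    bch = (a & 3) << 6 | (b & 3) << 4 | (c & 3) << 2 | (d & 3)
    return r, g, bch

def unpack_tile_pixel(r, g, b):
    """Invert pack_tile_pixel: recover the four (u, v) refs and transform."""
    def uv(n):
        return (n >> 2 & 3, n & 3)
    return (uv(r >> 4), uv(r & 15), uv(g >> 4), uv(g & 15),
            (b >> 6 & 3, b >> 4 & 3, b >> 2 & 3, b & 3))
```

With 2 bits per axis each reference can only address a 4x4 tile grid, which is where the 16-textures-per-atlas ceiling below comes from.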
This approach however is rather limited to 16 tile textures max per atlas… assuming I store all R/G/B reference textures plus the blend texture reference in a single 24-bit pixel for the tile bitmap lookup (the alpha channel stays at full 1.0 unused, unless I resort to another channel map for that, since semi-transparent pixels seem to negatively affect RGB values on some platforms). If using shader scheme 2 or 3, since mipmaps don't need to be baked into the texture atlas, I could afford double the number of textures, but even then I'd need to rethink the encoding scheme to better optimize for that situation and afford more than 16 tile textures per atlas (e.g. one texture UV reference with u in the red channel and v in the green channel, and the UV transforms in the blue channel).
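The alternative encoding mentioned at the end could be sketched the same way. This assumes a full 8 bits per channel for the single reference (again, my own naming, and only one of several ways to split those bits):

```python
def pack_alt_pixel(ref_u, ref_v, xform):
    """One texture reference per pixel: u in red, v in green (8 bits each,
    so up to a 256x256 tile grid in principle), and the four 2-bit
    UV-transform values a..d packed into blue, most significant first."""
    a, b, c, d = xform
    blue = (a & 3) << 6 | (b & 3) << 4 | (c & 3) << 2 | (d & 3)
    return ref_u & 255, ref_v & 255, blue
```

The trade-off versus the current scheme: far more addressable tiles per atlas, but only one texture reference per bitmap pixel instead of four, so the R/G/B/blend samples would need pixels (or channels) of their own.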