WebGL Texture Arrays

Has the subject of WebGL texture arrays ever been tackled in the context of PlayCanvas? They allow bypassing the 16-texture limit for shaders, which is crucial for terrain painting.
Perhaps there’s a clever way to use them even without proper engine support?

Calling @mvaligursky, he may know more on the matter.

There is no support for texture arrays at the moment, as the focus was on WebGL1 & 2 compatibility across most features. This will change as we’re working on a WebGPU renderer, but that is still a while off.

I’m not sure if there’s some way to use texture arrays without adding some engine support - you could add a callback on rendering of your mesh instance and use the WebGL API directly, but that would be complicated to get right, I think.

If you don’t mind me asking, which callback are you referring to, and what would be most problematic about it? Since it doesn’t seem like engine support is coming soon, I might still try to figure this out on my own.

this one

it gets called after all the WebGL state is set up for the mesh instance … at that point you could most likely set up your own things.

When you directly set something on the WebGL context, make sure to invalidate our shadow state for it, so that on the next render call we force-set it. Otherwise we’d believe some other texture is set up (and not the one you overwrote), and subsequent render calls might not work correctly. See WebglGraphicsDevice for how this works.
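Putting the two points above together, a sketch of the callback approach could look like this. Note that `drawCallback` on a mesh instance and the device’s `textureUnits` cache are engine internals, so those names are assumptions that may vary between engine versions:

```javascript
// Hedged sketch: bind a raw WebGL texture array from a mesh instance
// callback, then invalidate the device's cached binding (shadow state)
// so the engine force-sets its own texture on the next render call.
function bindRawTextureArray(device, glTexture, unit) {
    const gl = device.gl;

    // bind our texture array directly through the WebGL API
    gl.activeTexture(gl.TEXTURE0 + unit);
    gl.bindTexture(gl.TEXTURE_2D_ARRAY, glTexture);

    // clear the shadow state for this unit so the engine no longer
    // believes its own texture is still bound there
    if (Array.isArray(device.textureUnits?.[unit])) {
        device.textureUnits[unit].fill(null);
    }
}

// hypothetical usage:
// meshInstance.drawCallback = () =>
//     bindRawTextureArray(pc.app.graphicsDevice, myGlTex, 15);
```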

Looks like I managed to make them work after all. I left the drawCallback solution as a last resort, and fortunately it wasn’t needed. Maybe it would’ve been cleaner, but only if you don’t like monkey-patching ^^.
From my venture into the depths of the engine, it seems like basic support for texture arrays would be quite simple to implement.
I’m sharing what I did here in case anyone wants to do the same.

First, the definition of a new uniform type:

const device = pc.app.graphicsDevice;
device.pcUniformType[device.gl.SAMPLER_2D_ARRAY] = 50;

The engine’s uniform-type constants end at about 35, so there’s still a bit of room.

Then, the ugliest part: patching the WebglShader.postLink method.

const shader = this.boilerplateMaterial.resource.shader;
const glShaderPrototype = shader.impl.__proto__;
glShaderPrototype._prevPostLink ??= glShaderPrototype.postLink;

glShaderPrototype.postLink = function(device, shader) {
    const res = this._prevPostLink(device, shader);
    // iterate backwards so splicing doesn't skip entries
    for (let i = this.uniforms.length - 1; i >= 0; i--) {
        const shaderInput = this.uniforms[i];
        if (shaderInput.dataType === 50) { // our SAMPLER_2D_ARRAY type
            this.uniforms.splice(i, 1);
            this.samplers.push(shaderInput); // treat it as a sampler instead
        }
    }
    return res;
};

All of this awkward patching would be extremely easy to avoid in the engine. Its only purpose is to compensate for one missing entry here:

if (info.type === gl.SAMPLER_2D || info.type === gl.SAMPLER_CUBE ||
   (device.webgl2 && (info.type === gl.SAMPLER_2D_SHADOW ||
   info.type === gl.SAMPLER_CUBE_SHADOW || info.type === gl.SAMPLER_3D))) {

so that our shaderInput doesn’t land among the uniforms but with the samplers instead.
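For reference, the engine-side fix could be as small as extending that condition with one more entry (a sketch only; SAMPLER_2D_ARRAY exists solely on a WebGL2 context, hence the webgl2 branch):

```javascript
if (info.type === gl.SAMPLER_2D || info.type === gl.SAMPLER_CUBE ||
   (device.webgl2 && (info.type === gl.SAMPLER_2D_SHADOW ||
   info.type === gl.SAMPLER_CUBE_SHADOW || info.type === gl.SAMPLER_3D ||
   info.type === gl.SAMPLER_2D_ARRAY))) {
```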

At some point we create a regular pc.Texture. Test data can be created just like for Three.js.
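As a hedged example (the dimensions and solid-color layers are purely illustrative), such test data can be a single flat RGBA8 buffer with one slice per layer:

```javascript
// Test data for a width x height x depth RGBA8 texture array:
// layer 0 is solid red, layer 1 solid green, layer 2 solid blue.
const width = 4, height = 4, depth = 3;
const data = new Uint8Array(width * height * depth * 4);
for (let layer = 0; layer < depth; layer++) {
    for (let i = 0; i < width * height; i++) {
        const o = (layer * width * height + i) * 4;
        data[o + 0] = layer === 0 ? 255 : 0; // R
        data[o + 1] = layer === 1 ? 255 : 0; // G
        data[o + 2] = layer === 2 ? 255 : 0; // B
        data[o + 3] = 255;                   // A
    }
}
```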

const textureArrayOptions = {
    format: pc.PIXELFORMAT_R8_G8_B8_A8,
    width: width,
    height: height,
    depth: depth, // number of textures in the array
    magFilter: pc.FILTER_NEAREST,
    minFilter: pc.FILTER_NEAREST,
    mipmaps: false,
    addressU: pc.ADDRESS_CLAMP_TO_EDGE,
    addressV: pc.ADDRESS_CLAMP_TO_EDGE,
};
this.tex = new pc.Texture(this.app.graphicsDevice, textureArrayOptions);
this.tex._levels[0] = data; // this is to avoid additional patching
this.material.setParameter('myTexArray', this.tex);

This part is surprisingly almost the same as with other textures, at least for this simple example.
The last serious patching is for texture uploading, but from the engine’s perspective there’s almost nothing new here - I copy-pasted the relevant parts and, same story, just added one more case.
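For completeness, the shader side needs GLSL ES 3.00 (WebGL2). A minimal fragment shader sampling the array might look like the string below; `myTexArray` matches the setParameter call above, while the varying name and the fixed layer index are illustrative assumptions:

```javascript
// Minimal GLSL ES 3.00 fragment shader that samples the texture array.
const texArrayFS = `#version 300 es
precision mediump float;

uniform mediump sampler2DArray myTexArray;

in vec2 vUv0;
out vec4 fragColor;

void main() {
    // the third coordinate selects the layer (0 .. depth - 1)
    fragColor = texture(myTexArray, vec3(vUv0, 2.0));
}`;
```

For terrain painting, that fixed `2.0` would instead come from a splat/control map, but that part is unchanged by the texture-array plumbing.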

TextureArray.prototype.monkeyPatchTexture = function(tex) {
    const device = pc.app.graphicsDevice;

    tex.impl._prevInitialize ??= tex.impl.initialize;
    tex.impl.initialize = function(device, texture) {
        this._prevInitialize(device, texture);
        this._glTarget = device.gl.TEXTURE_2D_ARRAY;
    };

    tex.impl.upload = function(device, texture) {
        const gl = device.gl;

        if (!texture._needsUpload && ((texture._needsMipmapsUpload && texture._mipmapsUploaded) || !texture.pot))
            return;

        let mipLevel = 0;
        let mipObject;
        let resMult;

        const requiredMipLevels = Math.log2(Math.max(texture._width, texture._height)) + 1;

        while (texture._levels[mipLevel] || mipLevel === 0) {
            if (!texture._needsUpload && mipLevel === 0) {
                mipLevel++;
                continue;
            } else if (mipLevel && (!texture._needsMipmapsUpload || !texture._mipmaps)) {
                break;
            }

            mipObject = texture._levels[mipLevel];

            if (mipLevel === 1 && !texture._compressed && texture._levels.length < requiredMipLevels) {
                // only a partial mip chain was supplied - generate the rest on the GPU
                gl.generateMipmap(this._glTarget);
                texture._mipmapsUploaded = true;
            }

            resMult = 1 / Math.pow(2, mipLevel);
            // the one additional case: upload every layer of the array at once
            gl.texImage3D(gl.TEXTURE_2D_ARRAY,
                mipLevel,
                this._glInternalFormat,
                Math.max(texture._width * resMult, 1),
                Math.max(texture._height * resMult, 1),
                texture._depth,
                0,
                this._glFormat,
                this._glPixelType,
                mipObject);

            texture._mipmapsUploaded = mipLevel !== 0;
            mipLevel++;
        }

        if (texture._needsUpload) {
            texture._levelsUpdated[0] = false;
        }

        if (!texture._compressed && texture._mipmaps && texture._needsMipmapsUpload && (texture.pot || device.webgl2) && texture._levels.length === 1) {
            gl.generateMipmap(this._glTarget);
            texture._mipmapsUploaded = true;
        }

        if (texture._gpuSize) {
            texture.adjustVramSizeTracking(device._vram, -texture._gpuSize);
        }

        texture._gpuSize = texture.gpuSize;
        texture.adjustVramSizeTracking(device._vram, texture._gpuSize);
    };
};

So yeah, this way I don’t need to do any housekeeping myself, since the engine handles it almost out of the box.


Nicely done @redka!
I’ve created a ticket to implement it on our side as well.