GLTF Model Not Rendering to Color Buffer for Depth Picking – Only Planes Work, Not Imported Model

Hi PlayCanvas community,

I’m implementing a measurement tool using a custom color buffer/depth picking approach. My setup works for simple primitives (like planes), but not for imported GLTF models. Here’s what’s happening:

  • I have a ColorBufferPicker class that renders selected targets with a custom depth shader to an offscreen color buffer, then reads the pixel under the mouse to decode depth and compute a world position.
  • My targets array contains three entities: two planes (which work fine) and one imported GLTF model (which does not render to the color buffer for picking).
  • When I try to pick on the GLTF model, the alpha channel of the pixel is always zero, so picking fails.
```js
// In MeasurementPresenter.ts
this.targets = this.collectRenderTargets([this.plane, this.plane2, this.modelObject]); // modelObject is the GLTF root

// In ColorBufferPicker.ts
getWorldPos(event, camera, targets, range) {
    // ... set up render target and shader ...
    targets.forEach(target => {
        if (target.render) {
            target.render.material = this.shader;
        }
    });
    camera.camera.renderTarget = this.renderTarget;
    this.app.render();
    // ... read pixel, decode depth ...
}
```
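
For reference, the depth-to-RGBA packing that this approach relies on (the shader encodes a normalized depth into the four color channels of the offscreen target, and the readback decodes it) can be exercised on the CPU. This is only a sketch with hypothetical helper names, mirroring the GLSL `float2vec4` encode and the `vec4ToFloat` decode from the full script further down:

```typescript
// Hypothetical CPU mirror of the shader's float2vec4: pack a normalized
// depth (0..1) into four bytes, least significant channel first.
function packDepth(value: number): Uint8Array {
    const v = Math.min(Math.max(value, 0), 1);
    const bitSh = [256 * 256 * 256, 256 * 256, 256, 1];
    const bitMsk = [0, 1 / 256, 1 / 256, 1 / 256];
    const frac = bitSh.map(s => (v * s) % 1);           // fract(value * bitSh)
    const carry = [frac[0], frac[0], frac[1], frac[2]]; // res.xxyz
    const res = frac.map((r, i) => r - carry[i] * bitMsk[i]);
    // Quantize to bytes, as the GPU does when writing to an RGBA8 target.
    return new Uint8Array(res.map(r => Math.round(r * 255)));
}

// Hypothetical CPU mirror of vec4ToFloat: weighted sum of the channels.
function unpackDepth(pixel: Uint8Array): number {
    return pixel[0] / (255 * 256 * 256 * 256)
         + pixel[1] / (255 * 256 * 256)
         + pixel[2] / (255 * 256)
         + pixel[3] / 255;
}
```

The round trip is only as precise as 8-bit quantization allows, which is why the scheme spreads the value across all four channels instead of using a single one.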

What I’ve checked:

  • The GLTF model loads and displays correctly in the main scene.
  • collectRenderTargets finds all entities with a render component, including the GLTF model’s meshes.
  • The planes work perfectly for picking, but the GLTF model never writes to the color buffer (the alpha channel is always 0).
Questions:

  • Why does the GLTF model not render to my custom color buffer for depth picking, while the planes do?
  • Is there something special about PlayCanvas GLTF-imported entities or their materials that prevents them from being replaced by my custom shader/material?
  • What is the correct way to ensure all meshes of a GLTF model render to a custom render target with a custom shader for depth/color picking?
  • Are there best practices for overriding materials on imported GLTF meshes for offscreen rendering in PlayCanvas?

Here is my ColorBufferPicker script:

```js
import * as pc from 'playcanvas';

export class ColorBufferPicker {
    private app: pc.Application;
    private canvas: HTMLCanvasElement;
    private colorBuffer: pc.Texture;
    private renderTarget: pc.RenderTarget;
    private shader: pc.ShaderMaterial;

    constructor(app: pc.Application, canvas: HTMLCanvasElement) {
        this.app = app;
        this.canvas = canvas;

        // Create color buffer texture for depth encoding
        this.colorBuffer = new pc.Texture(app.graphicsDevice, {
            width: app.graphicsDevice.width,
            height: app.graphicsDevice.height,
            format: pc.PIXELFORMAT_RGBA8,
            mipmaps: false
        });

        // Create render target
        this.renderTarget = new pc.RenderTarget({
            colorBuffer: this.colorBuffer,
            depth: true
        });

        // Create depth shader
        this.shader = this.createDepthShader();
    }

    private createDepthShader(): pc.ShaderMaterial {
        return new pc.ShaderMaterial({
            uniqueName: 'depthPickingShader',
            attributes: { aPosition: pc.SEMANTIC_POSITION },
            vertexGLSL: `
                attribute vec3 aPosition;
                uniform mat4 matrix_model;
                uniform mat4 matrix_viewProjection;
                uniform mat4 matrix_view;
                uniform float uNearClip;
                uniform float uFarClip;
                varying float vNormalizedDepth;
                
                void main(void) {
                    vec4 worldPosition = matrix_model * vec4(aPosition, 1.0);
                    vec4 viewPosition = matrix_view * worldPosition;
                    gl_Position = matrix_viewProjection * worldPosition;
                    
                    // Linear depth in view space (positive)
                    float linearDepth = -viewPosition.z;
                    
                    // Normalize depth to 0-1 range for better encoding
                    vNormalizedDepth = (linearDepth - uNearClip) / (uFarClip - uNearClip);
                }
            `,
            fragmentGLSL: `
                precision highp float;
                varying float vNormalizedDepth;
                
                // Improved float to RGBA encoding
                vec4 float2vec4(float value) {
                    value = clamp(value, 0.0, 1.0);
                    const vec4 bitSh = vec4(256.0 * 256.0 * 256.0, 256.0 * 256.0, 256.0, 1.0);
                    const vec4 bitMsk = vec4(0.0, 1.0 / 256.0, 1.0 / 256.0, 1.0 / 256.0);
                    vec4 res = fract(value * bitSh);
                    res -= res.xxyz * bitMsk;
                    return res;
                }
                
                void main(void) {
                    gl_FragColor = float2vec4(vNormalizedDepth);
                }
            `
        });
    }

    getWorldPos(event: pc.MouseEvent, camera: pc.Entity, targets: pc.Entity[], range?: number): pc.Vec3 | null {
        if (!camera.camera || !targets || targets.length === 0) return null;

        // Store original materials and render target
        const originalMaterials = targets.map(target => target.render?.material || null);
        const origRT = camera.camera.renderTarget;

        try {
            // Set shader uniforms for depth range
            const nearClip = camera.camera.nearClip;
            const farClip = range || camera.camera.farClip;
            
            this.shader.setParameter('uNearClip', nearClip);
            this.shader.setParameter('uFarClip', farClip);
            
            // Apply depth shader to all targets
            targets.forEach(target => {
                if (target.render) {
                    target.render.material = this.shader;
                }
            });
            
            camera.camera.renderTarget = this.renderTarget;
            this.app.render();

            // Calculate pixel coordinates once
            const rect = this.canvas.getBoundingClientRect();
            const pixelX = Math.floor((event.x - rect.left) * (this.colorBuffer.width / this.canvas.clientWidth));
            const pixelY = Math.floor((this.canvas.clientHeight - (event.y - rect.top)) * (this.colorBuffer.height / this.canvas.clientHeight));
            
            // Read pixel data
            const gl = (this.app.graphicsDevice as any).gl;
            gl.bindFramebuffer(gl.FRAMEBUFFER, this.renderTarget.impl._glFrameBuffer);
            
            const pixel = new Uint8Array(4);
            gl.readPixels(pixelX, pixelY, 1, 1, gl.RGBA, gl.UNSIGNED_BYTE, pixel);
            gl.bindFramebuffer(gl.FRAMEBUFFER, null);

            if (pixel[3] === 0) return null;

            // Decode and convert to world position
            const normalizedDepth = this.vec4ToFloat(pixel);
            const linearDepth = nearClip + normalizedDepth * (farClip - nearClip);
            
            return this.depthToWorldPosition(event, camera, linearDepth);

        } finally {
            // Restore original materials for all targets
            targets.forEach((target, index) => {
                if (target.render && originalMaterials[index]) {
                    target.render.material = originalMaterials[index];
                }
            });
            camera.camera.renderTarget = origRT;
        }
    }

    private vec4ToFloat(pixel: Uint8Array): number {
        // Optimized RGBA to float decoding
        const r = pixel[0] * (1.0 / (255.0 * 256.0 * 256.0 * 256.0));
        const g = pixel[1] * (1.0 / (255.0 * 256.0 * 256.0));
        const b = pixel[2] * (1.0 / (255.0 * 256.0));
        const a = pixel[3] * (1.0 / 255.0);
        
        return r + g + b + a;
    }

    private depthToWorldPosition(event: pc.MouseEvent, camera: pc.Entity, depth: number): pc.Vec3 {
        const cam = camera.camera!;
        
        // Calculate NDC coordinates
        const rect = this.canvas.getBoundingClientRect();
        const ndcX = ((event.x - rect.left) / this.canvas.clientWidth) * 2 - 1;
        const ndcY = -(((event.y - rect.top) / this.canvas.clientHeight) * 2 - 1);
        
        // Calculate view space position
        const aspect = this.canvas.clientWidth / this.canvas.clientHeight;
        const tanHalfFov = Math.tan((cam.fov * Math.PI / 180) * 0.5);
        
        const viewX = ndcX * depth * tanHalfFov * aspect;
        const viewY = ndcY * depth * tanHalfFov;
        const viewPos = new pc.Vec3(viewX, viewY, -depth);
        
        // Transform to world space
        const invViewMatrix = new pc.Mat4().copy(cam.viewMatrix).invert();
        const worldPos = new pc.Vec3();
        invViewMatrix.transformPoint(viewPos, worldPos);
        
        return worldPos;
    }
}
```
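
As a sanity check on the perspective math used in `depthToWorldPosition`, the reconstruction can be verified with a CPU round trip: project a known view-space point to NDC with a pinhole model, then rebuild it from `(ndcX, ndcY, depth)` using the same formulas. The helper names here are illustrative and engine-free:

```typescript
// Round-trip check for the NDC -> view-space reconstruction.
const fovDeg = 60;                  // vertical field of view, like camera.fov
const aspect = 16 / 9;
const tanHalfFov = Math.tan((fovDeg * Math.PI / 180) * 0.5);

// Forward projection of a view-space point (camera looks down -Z).
function projectToNdc(p: [number, number, number]): [number, number] {
    const depth = -p[2];            // positive distance in front of the camera
    return [p[0] / (depth * tanHalfFov * aspect), p[1] / (depth * tanHalfFov)];
}

// Inverse, matching the formulas in depthToWorldPosition.
function reconstruct(ndcX: number, ndcY: number, depth: number): [number, number, number] {
    return [ndcX * depth * tanHalfFov * aspect, ndcY * depth * tanHalfFov, -depth];
}

const p: [number, number, number] = [1.2, -0.4, -5];
const [nx, ny] = projectToNdc(p);
const q = reconstruct(nx, ny, 5);   // recovers p up to floating-point error
```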

Hi @Daniyal_Shahid,

I was curious about this block of code. You mentioned the model was rendering correctly, but just to make sure: are you assigning your shader to the RenderComponent itself, or to the actual meshInstances held within it?

I have already tried all of those methods, but they all failed. I also applied this:


```js
entity.render.meshInstances.forEach(meshInstance => {
    meshInstance.material = myMaterial;
});

// Or for a single meshInstance:
entity.render.meshInstances[0].material = myMaterial;
```

I assign shaders to the meshInstances within the RenderComponent, not to the RenderComponent itself.