How to Access Depth Buffer?

How can I access the Depth Buffer in a post shader?

I found a line of code in an old post effect that should work: scope.resolve("uDepthBuffer").setValue(this.depthMap);
But where could I find this info in the docs?

It’s not public API, but this should work (although it’s subject to change!):

Call it in the initialize function of a script on your camera to request a depth map to be generated.
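The snippet this reply refers to didn't survive the copy. In legacy (pre-Layers) builds of the engine it was a single call roughly like the following; `requestDepthMap` was internal API, so the exact name and location may differ in your engine version:

```javascript
var requestDepth = pc.createScript('requestDepth');

// Attach this script to the camera entity. The camera component wraps an
// internal pc.Camera; requestDepthMap() on it was internal, subject-to-change
// API, so treat this as a guess at the legacy call, not a guarantee.
requestDepth.prototype.initialize = function () {;
};
```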


This is what the depth buffer looks like?

I would have expected a greyscale image?

It is indeed. It only looks funny because the depth is packed into RGBA8.


OK, meanwhile I found out that I have to unpack the depth buffer via:

    float unpackFloat(vec4 rgbaDepth) {
        const vec4 bitShift = vec4(1.0 / (256.0 * 256.0 * 256.0), 1.0 / (256.0 * 256.0), 1.0 / 256.0, 1.0);
        return dot(rgbaDepth, bitShift);
    }
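To see what that dot product is doing, here is a plain JavaScript sketch (not PlayCanvas API, just the same math) of the pack/unpack round trip. Note the real RGBA8 texture also quantizes each channel to 8 bits, which this float-precision version skips:

```javascript
// Pack a [0,1) float across four channels, the way the engine's depth shader
// does, then recover it with the same dot product as unpackFloat() above.
function packFloat(depth) {
    var fract = function (x) { return x - Math.floor(x); };
    var r = [
        fract(depth * 256 * 256 * 256),
        fract(depth * 256 * 256),
        fract(depth * 256),
        fract(depth)
    ];
    // subtract the bits already carried by the next-higher channel
    return [r[0], r[1] - r[0] / 256, r[2] - r[1] / 256, r[3] - r[2] / 256];
}

function unpackFloat(rgba) {
    // same coefficients as the GLSL bitShift vector
    return rgba[0] / (256 * 256 * 256) + rgba[1] / (256 * 256) + rgba[2] / 256 + rgba[3];
}

var depth = 0.34567;
console.log(unpackFloat(packFloat(depth))); // ≈ 0.34567
```

The subtraction terms telescope when the dot product is taken, so the round trip recovers the original value (up to 8-bit quantization on a real texture).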

I’m trying to make a simple Post Shader to visualize the Depth Pass, but I can’t seem to get it to work. Any help is greatly appreciated!

I pass a far and near variable to the shader but no matter what I set I never see any grayscale in the distance, I can only ever see the object and the background.

I’m guessing the error is somewhere in this line?
f = (depth - uFar) / (uFar - uNear);

Anyway, here goes:

//--------------- POST EFFECT DEFINITION------------------------//
pc.extend(pc, function () {
    // Constructor - Creates an instance of our post effect
    var vizDOFPostEffect = function (graphicsDevice, vs, fs) {
        this.needsDepthBuffer = true;
        // this is the shader definition for our effect
        this.shader = new pc.Shader(graphicsDevice, {
            attributes: {
                aPosition: pc.SEMANTIC_POSITION
            },
            vshader: [
                "attribute vec2 aPosition;",
                "varying vec2 vUv0;",
                "void main(void) {",
                "    gl_Position = vec4(aPosition, 0.0, 1.0);",
                "    vUv0 = (aPosition.xy + 1.0) * 0.5;",
                "}"
            ].join("\n"),
            fshader: [
                "precision " + graphicsDevice.precision + " float;",
                "uniform sampler2D uColorBuffer;",
                "uniform sampler2D uDepthBuffer;",
                "uniform float uNear;",
                "uniform float uFar;",
                "varying vec2 vUv0;",
                "float unpackFloat(vec4 rgbaDepth) {",
                "    const vec4 bitShift = vec4(1.0 / (256.0 * 256.0 * 256.0), 1.0 / (256.0 * 256.0), 1.0 / 256.0, 1.0);",
                "    return dot(rgbaDepth, bitShift);",
                "}",
                "void main() {",
                "    float f;",
                "    vec4 packedDepth = texture2D(uDepthBuffer, vUv0);",
                "    float depth = unpackFloat(packedDepth);",
                "    f = (depth - uFar) / (uFar - uNear);",
                "    gl_FragColor = vec4(f, f, f, 1.0);",
                "}"
            ].join("\n")
        });

        this.uNear = 0;
        this.uFar = 100;
    };

    // Our effect must derive from pc.PostEffect
    vizDOFPostEffect = pc.inherits(vizDOFPostEffect, pc.PostEffect);

    vizDOFPostEffect.prototype = pc.extend(vizDOFPostEffect.prototype, {
        // Every post effect must implement the render method which
        // sets any parameters that the shader might require and
        // also renders the effect on the screen
        render: function (inputTarget, outputTarget, rect) {
            var device = this.device;
            var scope = device.scope;

            // Set the input render target to the shader. This is the image rendered from our camera
            scope.resolve("uColorBuffer").setValue(inputTarget.colorBuffer);
            scope.resolve("uDepthBuffer").setValue(this.depthMap);
            scope.resolve("uNear").setValue(this.uNear);
            scope.resolve("uFar").setValue(this.uFar);

            // Draw a full screen quad on the output target. In this case the output target is the screen.
            // Drawing a full screen quad will run the shader that we defined above
            pc.drawFullscreenQuad(device, outputTarget, this.vertexBuffer, this.shader, rect);
        }
    });

    return {
        vizDOFPostEffect: vizDOFPostEffect
    };
}());

//--------------- SCRIPT DEFINITION------------------------//
var vizDOFPostEffect = pc.createScript('vizDOFPostEffect');

vizDOFPostEffect.attributes.add('near', {
    type: 'number',
    min: 0,
    max: 256,
    step: 1,
    default: 100
});

vizDOFPostEffect.attributes.add('far', {
    type: 'number',
    min: 0,
    max: 256,
    step: 1,
    default: 100
});

// initialize code called once per entity
vizDOFPostEffect.prototype.initialize = function() {
    var effect = new pc.vizDOFPostEffect(;

    // add the effect to the camera's postEffects queue
    var queue =;
    queue.addEffect(effect, false);

    // when the script is enabled add our effect to the camera's postEffects queue
    this.on('enable', function () {
        queue.addEffect(effect, false);
    });

    // when the script is disabled remove our effect from the camera's postEffects queue
    this.on('disable', function () {
        queue.removeEffect(effect);
    });

    this.on('attr:near', function (value, prev) {
        effect.near = value;
    });

    this.on('attr:far', function (value, prev) {
        effect.far = value;
    });
};

Currently screen depth is

    (gl_FragCoord.z / gl_FragCoord.w) / camera_far

0 is at the near plane, 1 is at the far plane. Some ranges might look barely distinguishable to the eye, and you might want to scale/power the value in some way to see better.
The formula can change in the future.
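Why the scaling helps: with a large far plane and geometry close to the camera, the normalized depth values are tiny and render as near-black. A plain JavaScript illustration of the power trick (the 0.25 exponent is just an arbitrary example; in GLSL it would be `pow(depth, 0.25)`):

```javascript
// Raising a small normalized depth to a fractional power spreads the low
// range out so differences near the camera become visible as gray shades.
function visualizeDepth(normalizedDepth) {
    return Math.pow(normalizedDepth, 0.25);
}

console.log(visualizeDepth(0.0));    // 0: near plane stays black
console.log(visualizeDepth(0.0016)); // ~0.2: visible gray instead of near-black
console.log(visualizeDepth(1.0));    // 1: far plane stays white
```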
Here’s a little example of depth visualization:


really cool! thanks so much!
did you just make this, or is there somewhere I could have found that example?

Just made it 5 minutes ago :slight_smile:

really cool, so all I had to do was scale depth. nice.


@Mr_F what would be the equivalent of your sample with the new Layers system, to get access to the depthMap from a shader for a given render pass?

Any idea how to access the depthMap of a render pass?

Still unable to figure this out! :slight_smile:

Is your renderTarget.depthBuffer empty?

Trying to grab the depthBuffer of a render pass from a postProcess effect, using the new system.

Indeed, the layer.renderTarget property of a given layer, even for the default ones (e.g. World), is empty.

Thinking I might be missing something here, it used to be super easy :slight_smile:

    app.scene.layers.getLayerByName("World").onPostRender = (cameraIndex) => {
        // grab them by the buffer
    };

Try that. But I’m almost sure it’s going to be cleared too.

You can grab the colorBuffer by doing that, thanks. Although the depthBuffer is always empty. Tried to override the clear settings for the World layer but … it is still empty.

    var layerWorld ="World");
    layerWorld.overrideClear = true;
    layerWorld.clearDepthBuffer = false;
    layerWorld.onPostRender = (cameraIndex) => {
        console.log(layerWorld.renderTarget.colorBuffer); // Grabs the colorBuffer
        console.log(layerWorld.renderTarget.depthBuffer); // Always empty
    };

Are you sure your renderTarget is using a depthBuffer though?

Also, having a colorBuffer doesn’t mean that it’s not empty.
Try to read its pixels.
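One way to check, sketched below: WebGL can't read a texture directly, so you attach it to a scratch framebuffer and gl.readPixels from it. How you get the raw WebGLTexture handle out of a pc.Texture is engine-internal and version-specific, so treat that part as an assumption:

```javascript
// Read back a texture's pixels to check whether it is all zeros.
// `gl` is the WebGLRenderingContext (app.graphicsDevice.gl) and `glTexture`
// is the raw WebGLTexture handle; obtaining that handle from a pc.Texture
// depends on engine internals and is not shown here.
function readTexturePixels(gl, glTexture, width, height) {
    var fb = gl.createFramebuffer();
    gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
    gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, glTexture, 0);
    var pixels = new Uint8Array(width * height * 4); // RGBA, one byte per channel
    gl.readPixels(0, 0, width, height, gl.RGBA, gl.UNSIGNED_BYTE, pixels);
    gl.bindFramebuffer(gl.FRAMEBUFFER, null);
    gl.deleteFramebuffer(fb);
    return pixels;
}
```

If every byte comes back 0, the buffer really is empty rather than merely unviewed.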

It seems not; this is from the PlayCanvas source code, application.js:

this.defaultLayerWorld = new pc.Layer({
    name: "World",
    // ...
});

It doesn’t create a special depth render target, like the defaultLayerDepth does. It renders directly to the screen, which is fine with me, as long as I can grab it :slight_smile:

But the question still remains, how do we get the depth map from the World layer?

The colorBuffer is fine and is passed by default as a uniform on the postEffectPass shader.