Taking a screenshot with PostEffects on the camera

I have an app that takes a screenshot through a given camera, and that camera has the standard SSAO script applied. I'm using the method given in the screenshot sample, basically assigning a custom render texture to camera.renderTarget. The snag is that the output image does not have SSAO applied.
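
For reference, this is roughly the setup I started from (my paraphrase of the pattern in the screenshot sample, not the sample verbatim; `captureCam` is my capture camera entity):

    // sketch: render the capture camera into an offscreen target instead of
    // the backbuffer, then read the pixels back after the frame renders
    var device = this.app.graphicsDevice;

    var colorBuffer = new pc.Texture(device, {
        width: device.width,
        height: device.height,
        format: pc.PIXELFORMAT_R8_G8_B8_A8,
        mipmaps: false
    });

    this.renderTarget = new pc.RenderTarget({
        colorBuffer: colorBuffer,
        depth: true,
        flipY: true // framebuffer rows come back bottom-up relative to the canvas
    });

    this.captureCam.camera.renderTarget = this.renderTarget;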

After some digging, I found that the postEffects queue swaps out the camera’s render target for the input texture of the first postEffect in the stack. I also discovered that there is a destinationRenderTarget on the postEffects stack that seemingly contains the final output of the effects stack.

This seems to work: the resulting texture has SSAO applied properly. However, it is also upside down and stretched horizontally, and I can't figure out why or what to do about it. This is basically what I've come up with:

    // remember whether the capture camera has an active postEffects stack
    this.camHasPostEffects = this.captureCam.camera.postEffects.enabled;

    this.renderTarget = new pc.RenderTarget({
        colorBuffer: colorBuffer,
        depth: true,
        stencil: true,
        // the direct render path needs flipping; the postEffects path is
        // handled separately after capture
        flipY: !this.camHasPostEffects,
        samples: pc.app.graphicsDevice.maxSamples
    });

    if (this.camHasPostEffects) {
        // redirect the final output of the effects stack into our target
        this.oldRT = this.captureCam.camera.postEffects.destinationRenderTarget;
        this.captureCam.camera.postEffects.destinationRenderTarget = this.renderTarget;
    }
    else {
        this.oldRT = this.captureCam.camera.renderTarget;
        this.captureCam.camera.renderTarget = this.renderTarget;
    }

Then after capturing, I flip the image using the 2D canvas context, because flipY is ignored on destinationRenderTarget for some reason:

if (this.camHasPostEffects) {
    // note: scale(1, -1) mirrors around y = 0, so the subsequent draw also
    // needs a translate by the canvas height (or a negative y coordinate)
    this.context.scale(1, -1);
}

Here is my result with SSAO disabled:

And here is my result with SSAO enabled:

In general this feels extremely hacky and I can only assume that I’m not supposed to be touching destinationRenderTarget.

Is there a better way to capture a screenshot with postEffects attached? Is this a bug? Is there a workaround?

Hi @Quincy,

So, I forked the Capturing a screenshot project, added the ssao.js script from the engine repo and attached it to both the main and the screenshot cameras.

Taking a screenshot seems to be working fine. Let me know if I misunderstood what you are trying to do.

2 Likes

Yeah I just forked that project and confirmed that it works as expected…

Not sure exactly what's going on; I'm guessing it's because my project is based on an old version of that code.

1 Like

I think I figured out what’s going on.

After copying the screenshot code into my project and editing it as needed, I was still not getting postEffects rendering. I copied my code back to the demo project to ensure there were no weird scene changes, and it still wasn't working.

The only change I really made was how the camera is managed: instead of having a second camera and rigging up the capture camera on initialize, I'm using my main camera and changing the renderTarget one frame before the image is taken. It turns out that setting camera.renderTarget after the camera is already enabled blows away the postEffects stack, and it is never reapplied. After looking at the code on GitHub, I determined that the postEffects stack is typically constructed when the camera is enabled or an effect is added/removed.

My workaround is to disable the camera entity before setting camera.renderTarget, and reenable it afterwards. It’s not the most elegant solution and I suspect there will be complications, but it works.
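
In code, the workaround is roughly this (a sketch; `mainCam` stands for my main camera entity):

    // toggling the entity tears down and rebuilds the postEffects stack,
    // which is what picks up the new render target
    this.mainCam.enabled = false;
    this.mainCam.camera.renderTarget = this.renderTarget;
    this.mainCam.enabled = true;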

3 Likes

For your reference @mvaligursky

So this works, however I’m now noticing that I’m getting a pretty severe downgrade in image quality in my final screenshot when the SSAO script is enabled.

Here is a comparison of a shot with SSAO enabled (left) and SSAO disabled (right). I've zoomed in on the same part of each image to show the issue better.

I think this is happening due to differences between the intermediary render textures in the effects stack and the final output texture I've provided. One difference I've spotted is that their samples value appears to be set to graphicsDevice.samples (4) rather than the value set on my destination RenderTarget (8). They also appear to be different resolutions.

Here is my destination target:

And here is the SSAO input target:

This appears to match my current screen size, and I'm pretty sure this discrepancy explains the horrible quality reduction. I don't think this behavior is ever really desirable; the generated postEffects RenderTargets should be made to match the destination RenderTarget's quality settings as closely as possible.

I wrote a little workaround that (mostly) works.


The resolution issues are mostly gone, but now it is too "sharp". This is with samples = 8; I also tried 1 and 4, with varying results.

On top of that, I'm now getting my SSAO output rendered as an offset shadow, seemingly because the SSAO script sets up a render target on initialize that is sized to the graphics device. I'm not exactly sure how this gets resized when the window changes.

I tried simply resizing the SSAO target by poking its private fields:

cameraEntity.script.ssao.effect.target.colorBuffer._width = texWidth;
cameraEntity.script.ssao.effect.target.colorBuffer._height = texHeight;

but that seems to be ignored in the output image, presumably because changing the private _width/_height properties doesn't reallocate the underlying GPU texture.


(SSAO darkened for visibility)

And then I get a permanent shadow in-app after I've taken the screenshot.

I’m not really sure what to do about this. It seems that reliance on the graphicsDevice size makes it extremely difficult to capture a screenshot at a different resolution.

Another attempt:

Basically just resizing the SSAO effect's render target in the assign function in posteffects-ssao.js if we're rendering to a destination. This does not fix the problem.

I’m pretty out of my depth here.

@mvaligursky Any advice you can give here?

It sounds like this is the issue you’re getting:

The problem is that post effects use the framebuffer's size, and so don't allow rendering to a smaller / larger texture. This is something we need to fix on the engine side; I'm not sure if there's any workaround you can do.

Could you temporarily increase the device pixel ratio for the frames needed for the screenshot and change it back when the screenshot is done?
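
Something along these lines (an untested sketch; note the engine clamps the effective ratio, as discussed further down):

    var device = this.app.graphicsDevice;
    var oldRatio = device.maxPixelRatio;

    // render the screenshot frames at a higher resolution
    // (the engine clamps to window.devicePixelRatio when resizing, so this
    // may have no effect on devices whose maximum ratio is 1)
    device.maxPixelRatio = 2;

    // ... render the frame and take the screenshot ...

    device.maxPixelRatio = oldRatio; // restore the original resolution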

I managed to technically fix the resolution-crushing issue by adding this code to the top of my Screenshot code to dynamically inject "fixed" functions:

pc.PostEffectQueue.prototype.targetWidth = function() {
    let rt = this.camera.renderTarget;
    // if the camera already points at one of the queue's own offscreen
    // targets, measure the final destination instead
    if (rt && rt.isOffscreenTarget) {
        rt = this.destinationRenderTarget;
    }
    return rt ? rt.width : this.app.graphicsDevice.width;
};

pc.PostEffectQueue.prototype.targetHeight = function() {
    let rt = this.camera.renderTarget;
    if (rt && rt.isOffscreenTarget) {
        rt = this.destinationRenderTarget;
    }
    return rt ? rt.height : this.app.graphicsDevice.height;
};

pc.PostEffectQueue.prototype._allocateColorBuffer = function(format, name) {
    console.log('Allocating ColorBuffer: '+name);

    const rect = this.camera.rect;
    const width = Math.floor(rect.z * this.targetWidth() * this.renderTargetScale);
    const height = Math.floor(rect.w * this.targetHeight() * this.renderTargetScale);

    const colorBuffer = new pc.Texture(this.app.graphicsDevice, {
        name: name,
        format: format,
        width: width,
        height: height,
        mipmaps: false,
        minFilter: pc.FILTER_NEAREST,
        magFilter: pc.FILTER_NEAREST,
        addressU: pc.ADDRESS_CLAMP_TO_EDGE,
        addressV: pc.ADDRESS_CLAMP_TO_EDGE
    });

    return colorBuffer;
};

pc.PostEffectQueue.prototype._createOffscreenTarget = function(useDepth, hdr) {

    const device = this.app.graphicsDevice;
    const format = hdr ? device.getHdrFormat() : pc.PIXELFORMAT_R8_G8_B8_A8;
    const name = this.camera.entity.name + '-posteffect-' + this.effects.length;

    const colorBuffer = this._allocateColorBuffer(format, name);

    const useStencil =  this.app.graphicsDevice.supportsStencil;
    const samples = useDepth ? device.samples : 1;

    let rt = new pc.RenderTarget({
        colorBuffer: colorBuffer,
        depth: useDepth,
        stencil: useStencil,
        samples: samples
    });
    rt.isOffscreenTarget = true;
    return rt;
};

Basically all this does is check for an existing renderTarget and use its size if one is found; otherwise we default to the device size.

The issue I’m running into now is that the SSAO script draws the added shadows way too large on the screenshot:

I also made these changes to the posteffect-ssao.js script from github:

    // render target creation removed from the constructor; the SSAO target is
    // now (re)created lazily in resizeRenderTarget() below, sized to the input

    // Uniforms
    this.radius = 4;
    this.brightness = 0;
    this.samples = 20;
}


SSAOEffect.prototype = Object.create(pc.PostEffect.prototype);
SSAOEffect.prototype.constructor = SSAOEffect;

Object.assign(SSAOEffect.prototype, {
    resizeRenderTarget: function (inputTarget) {
        // (re)create the SSAO target whenever the input target's size changes,
        // instead of sizing it to the graphics device once at creation time
        if (this.target === undefined || this.target.width !== inputTarget.width || this.target.height !== inputTarget.height) {
            var colorBuffer = new pc.Texture(pc.app.graphicsDevice, {
                format: pc.PIXELFORMAT_R8_G8_B8_A8,
                minFilter: pc.FILTER_LINEAR,
                magFilter: pc.FILTER_LINEAR,
                addressU: pc.ADDRESS_CLAMP_TO_EDGE,
                addressV: pc.ADDRESS_CLAMP_TO_EDGE,
                width: inputTarget.width,
                height: inputTarget.height,
                mipmaps: false
            });
            colorBuffer.name = 'ssao';
            this.target = new pc.RenderTarget({
                colorBuffer: colorBuffer,
                depth: false
            });
        }
    },
    render: function (inputTarget, outputTarget, rect) {
        var device = this.device;
        var scope = device.scope;

        this.resizeRenderTarget(inputTarget);
        let width = this.target.width;
        let height = this.target.height;

        var sampleCount = this.samples;
        var spiralTurns = 10.0;
        var step = (1.0 / (sampleCount - 0.5)) * spiralTurns * 2.0 * 3.141;

        var radius = this.radius;
        var bias = 0.001;
        var peak = 0.1 * radius;
        var intensity = (peak * 2.0 * 3.141) * 0.125;
        var projectionScale = 0.1 * height;
        var cameraFarClip = this.ssaoScript.entity.camera.farClip;

        scope.resolve("uAspect").setValue(width / height);
        scope.resolve("uResolution").setValue([width, height, 1.0 / width, 1.0 / height]);
        scope.resolve("uColorBuffer").setValue(inputTarget.colorBuffer);
        scope.resolve("uBrightness").setValue(this.brightness);

        scope.resolve("uInvFarPlane").setValue(1.0 / cameraFarClip);
        scope.resolve("uSampleCount").setValue([sampleCount, 1.0 / sampleCount]);
        scope.resolve("uSpiralTurns").setValue(spiralTurns);
        scope.resolve("uAngleIncCosSin").setValue([Math.cos(step), Math.sin(step)]);
        scope.resolve("uMaxLevel").setValue(0.0);
        scope.resolve("uInvRadiusSquared").setValue(1.0 / (radius * radius));
        scope.resolve("uMinHorizonAngleSineSquared").setValue(0.0);
        scope.resolve("uBias").setValue(bias);
        scope.resolve("uPeak2").setValue(peak * peak);
        scope.resolve("uIntensity").setValue(intensity);
        scope.resolve("uPower").setValue(1.0);
        scope.resolve("uProjectionScaleRadius").setValue(projectionScale * radius);

        pc.drawFullscreenQuad(device, this.target, this.vertexBuffer, this.ssaoShader, rect);

        scope.resolve("uSSAOBuffer").setValue(this.target.colorBuffer);

        // scope.resolve("uFarPlaneOverEdgeDistance").setValue(cameraFarClip / bilateralThreshold);
        scope.resolve("uFarPlaneOverEdgeDistance").setValue(1);

        scope.resolve("uBilatSampleCount").setValue(4);

        pc.drawFullscreenQuad(device, outputTarget, this.vertexBuffer, this.blurShader, rect);
    }
});

Instead of creating the colorBuffer when the shader is created, I now create a new one any time the input buffer changes size, and I use that size instead of the device size when setting the scope parameters. In theory this should work; from what I can tell I'm just drawing the quad wrong somehow.

1 Like

nice work on this so far!

1 Like

I’d love some pointers on how that SSAO shader works. At this point I’ve tried basically everything I can think of in terms of scaling it down and nothing works. And sometimes it gets really dark, like it’s just dumping the depth buffer to the color buffer.

Here is one such dark image:

I haven't tested this on devices yet, but on desktop it gives the following results:

Without this workaround:

With this workaround:

Project: https://playcanvas.com/editor/scene/1343984

It does change the size of the canvas, so there may be some limits on mobile devices with this method.

1 Like

This is interesting. I'm not exactly sure why this improves quality, since the ssao script doesn't take maxPixelRatio into account and device.width/device.height are unaffected.

But as far as I can tell it’s still going to be wrong if you try to take a screenshot at a different aspect ratio.

Changing the maxPixelRatio changes the size of the canvas, and therefore device.width/height are changed to match.

Yes, it does have the limitation of a different aspect ratio.

You could try changing the resolution mode to fixed and the fill mode to none, then changing the canvas size for the frames where the screenshot is taken.
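
Roughly like this (a sketch of the suggestion; `captureWidth`/`captureHeight` are hypothetical names for whatever resolution you want the screenshot at, and I'm assuming the project normally runs with FILLMODE_KEEP_ASPECT and RESOLUTION_AUTO):

    // detach the canvas from the window size for the capture frames
    this.app.setCanvasFillMode(pc.FILLMODE_NONE);
    this.app.setCanvasResolution(pc.RESOLUTION_FIXED, captureWidth, captureHeight);

    // ... render the frame and take the screenshot ...

    // then restore the normal sizing behaviour
    this.app.setCanvasFillMode(pc.FILLMODE_KEEP_ASPECT);
    this.app.setCanvasResolution(pc.RESOLUTION_AUTO);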

1 Like

It doesn’t seem like changing maxPixelRatio has any effect on the size reported by the pc.app.graphicsDevice… maybe I’m missing something?


This is using FILLMODE_KEEP_ASPECT. The other fill modes look the same.

I actually didn’t realize the app had a fixed aspect ratio, I’m wondering if I can just change it for a single frame to get the screenshot.

Ah, it's device specific: it only uses up to the maximum device pixel ratio available on the device. In your case, that seems to be 1: https://github.com/playcanvas/engine/blob/dev/src/graphics/graphics-device.js#L265

I've updated this project to temporarily resize the canvas resolution for the screenshot, which should work with devices that have a max pixel ratio of 1.

Relevant code:

    var onTakeScreenshot = function () {
        var w = this.app.graphicsDevice.width;
        var h = this.app.graphicsDevice.height;

        // render the capture frames at 4x the current resolution
        this.app.setCanvasResolution(pc.RESOLUTION_FIXED, w * 4, h * 4);

        this.triggerScreenshot = true;
        this.cameraEntity.enabled = true;
    };

    Screenshot.prototype.postRender = function () {
        if (this.triggerScreenshot) {
            this.takeScreenshot('screenshot');
            this.triggerScreenshot = false;
            this.cameraEntity.enabled = false;

            // restore the normal auto resolution once the capture is done
            this.app.setCanvasResolution(pc.RESOLUTION_AUTO);
        }
    };
3 Likes