Save specific rendered entities to image


I would like to save to image (screenshot) part of what is rendered in canvas (e.g. without background, HUD etc.)

What is the proposed way to achieve this?

Should I, on a single frame, iterate through all objects attached to this.app.root, disable the models on unwanted entities, and save the canvas to a data URL?


You could put tags on entities, which would let you mark only the things that have to be in the screenshot, or perhaps better the opposite: the things that shall not be in the screenshot. So you findByTag them and hide them, then render to a texture.
You can get that texture’s readPixels data and put it into an offscreen canvas2d; toDataURL will then give you a PNG of it.
That can then be converted to a blob and passed to window.open, which will basically download it :slight_smile:
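As a sketch of that last step (the helper name is made up, not part of PlayCanvas), a data URL can be turned into a Blob like this:

```javascript
// Hypothetical helper (not from the engine): convert a canvas data URL into
// a Blob so it can be opened via a short object URL instead of a
// multi-megabyte data: link.
function dataUrlToBlob(dataUrl) {
    var parts = dataUrl.split(',');              // header + base64 payload
    var mime = parts[0].match(/:(.*?);/)[1];     // e.g. "image/png"
    var binary = atob(parts[1]);                 // decode the base64 payload
    var bytes = new Uint8Array(binary.length);
    for (var i = 0; i < binary.length; i++) {
        bytes[i] = binary.charCodeAt(i);
    }
    return new Blob([bytes], { type: mime });
}

// In the browser, roughly:
// var url = URL.createObjectURL(dataUrlToBlob(canvas.toDataURL('image/png')));
// window.open(url);
```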


Great, thanks @max for the pipeline.

@max is it possible to render specific elements to some offscreen canvas?

I would like to take a screenshot of the current view without GUI elements. With my current code I get flickering for a few milliseconds, as I am enabling/disabling entities on the main canvas. I am doing something like this (entity toggling elided):

    // hide the unwanted entities here, then grab the frame a moment later
    setTimeout(function() {

        this.renderToTexture(function() {
            // ... re-enable the hidden entities in the callback
        }.bind(this) );

    }.bind(this), 50);

And the actual function that does the trick:

Screen.prototype.renderToTexture = function(callback) {

    // Create a render target with a depth buffer
    var colorBuffer = new pc.Texture(this.app.graphicsDevice, {
        width: this.app.graphicsDevice.canvas.width,
        height: this.app.graphicsDevice.canvas.height,
        format: pc.PIXELFORMAT_R8_G8_B8
    });
    var renderTarget = new pc.RenderTarget(this.app.graphicsDevice, colorBuffer, {
        depth: true
    });

    this.entityCamera.camera.renderTarget = renderTarget;

    setTimeout(function() {
        var gl = this.app.graphicsDevice.gl;
        var fb = gl.createFramebuffer();
        var pixels = new Uint8Array(colorBuffer.width * colorBuffer.height * 4);
        gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
        gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, colorBuffer._glTextureId, 0);
        gl.readPixels(0, 0, colorBuffer.width, colorBuffer.height, gl.RGBA, gl.UNSIGNED_BYTE, pixels);
        this.entityCamera.camera.renderTarget = null;

        var canvas = document.createElement("canvas"),
            ctx = canvas.getContext("2d"),
            img = pixels;

        canvas.width = colorBuffer.width;
        canvas.height = colorBuffer.height;

        // first, create a new ImageData to contain our pixels
        var imgData = ctx.createImageData(colorBuffer.width, colorBuffer.height); // width x height
        var data = imgData.data;

        // copy img byte-per-byte into our ImageData
        for (var i = 0, len = colorBuffer.width * colorBuffer.height * 4; i < len; i++) {
            data[i] = img[i];
        }

        // now we can draw our ImageData onto the canvas
        ctx.putImageData(imgData, 0, 0);

        var image = canvas.toDataURL('image/png');
        window.location.href = image.replace('image/png', 'image/octet-stream');

        if (typeof callback === "function") {
            callback(image); // notify the caller with the captured image
        }
    }.bind(this), 50);
};


Everything works fine but I would like to do all that on an offscreen canvas, outside of the main render loop.

The problem you are having comes from using timeouts. There should not be any async here.

Basically you need to set the render target for the camera, call the renderer yourself, and then set renderTarget back to null (the back buffer).

Regarding building an image from readPixels: first of all, you are doing a lot of allocations there, which you want to avoid at all costs. Pre-allocate, or allocate on first need and when the buffer size changes, then re-use the array buffers.
With readPixels, you can actually create a generic ArrayBuffer and multiple typed-array views onto it. That gives you one buffer of data that can be read as different data types, so you can create an ImageData from a clamped uint8 view and don’t need to loop through the pixels manually.
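That shared-buffer trick might look like this (a minimal sketch with made-up dimensions; the gl and ctx calls are commented out since they only exist in the browser/engine context):

```javascript
// One ArrayBuffer, two views over the same memory: readPixels writes through
// the Uint8Array view, ImageData reads through the Uint8ClampedArray view,
// so no per-byte copy loop is needed.
var width = 4, height = 4;                       // example size only
var buffer = new ArrayBuffer(width * height * 4);
var pixels = new Uint8Array(buffer);             // view for gl.readPixels
var clamped = new Uint8ClampedArray(buffer);     // view for ImageData

pixels[0] = 255;                                 // writes are visible in both views

// In the render code, roughly:
// gl.readPixels(0, 0, width, height, gl.RGBA, gl.UNSIGNED_BYTE, pixels);
// var imageData = new ImageData(clamped, width, height);
// ctx.putImageData(imageData, 0, 0);
```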

Thanks very much @max! Pretty much understood.

One question, which function do I use to call the renderer myself outside of the main loop?

That is the toughest question here :smiley:
You can start investigating what exactly you need from here: https://github.com/playcanvas/engine/blob/master/src/framework/application.js#L744

Thanks @max, this did the trick:

this.app.render(this.app.scene, this.entityCamera.camera);

Felt obliged to share the corrected code, per @max 's comments, for anyone interested.

There is a save method on the following script that takes a boolean true/false for whether you would like to download the screen grab, plus a callback that receives a base64 representation of the image. Before taking the image, all entities without the shareable tag are hidden.

This method doesn’t need the Preserve Drawing Buffer property to be enabled.

var Share = pc.createScript('Share');

// initialize code called once per entity
Share.prototype.initialize = function() {

    // prepare allocations for the screen grab up front
    // Create a render target with a depth buffer
    this.colorBuffer = new pc.Texture(this.app.graphicsDevice, {
        width: this.app.graphicsDevice.canvas.width,
        height: this.app.graphicsDevice.canvas.height,
        format: pc.PIXELFORMAT_R8_G8_B8
    });
    this.renderTarget = new pc.RenderTarget(this.app.graphicsDevice, this.colorBuffer, {
        depth: true
    });
    this.gl = this.app.graphicsDevice.gl;
    this.fb = this.gl.createFramebuffer();
    this.pixels = new Uint8Array(this.colorBuffer.width * this.colorBuffer.height * 4);
    this.canvas = document.createElement("canvas");
    this.canvasFinal = document.createElement("canvas");

    this.entityCamera = this.app.root.findByName('Camera');
};

Share.prototype.save = function(downloadImage, callback){

    this.nonSharableEntities = this.app.root.find(function(node) {
        return node.tags.has('shareable') === false;
    });

    // hide everything that should not appear in the screen grab
    this.enableEntities(false);

    this.renderToTexture(downloadImage, function(base64){

        // restore visibility, then hand the image to the caller
        this.enableEntities(true);

        if (typeof callback === "function") {
            callback(base64);
        }
    }.bind(this) );
};


Share.prototype.enableEntities = function(state){

    if( this.nonSharableEntities.length !== undefined){

        this.nonSharableEntities.forEach(function(entity, index){

            if( entity.model !== undefined ){

                if( state === false ){
                    // remember the current state before hiding the model
                    entity.oldModelState = entity.model.enabled;
                    entity.model.enabled = state;
                } else {
                    // restore the state saved before the grab
                    entity.model.enabled = entity.oldModelState;
                }
            }
        }, this);
    }
};

Share.prototype.renderToTexture = function(downloadImage, callback) {

    this.entityCamera.camera.renderTarget = this.renderTarget;
    this.app.render(this.app.scene, this.entityCamera.camera);
    this.gl.bindFramebuffer(this.gl.FRAMEBUFFER, this.fb);
    this.gl.framebufferTexture2D(this.gl.FRAMEBUFFER, this.gl.COLOR_ATTACHMENT0, this.gl.TEXTURE_2D, this.colorBuffer._glTextureId, 0);
    this.gl.readPixels(0, 0, this.colorBuffer.width, this.colorBuffer.height, this.gl.RGBA, this.gl.UNSIGNED_BYTE, this.pixels);
    this.entityCamera.camera.renderTarget = null;

    var ctx = this.canvas.getContext("2d"),
        img = this.pixels;

    this.canvas.width = this.colorBuffer.width;
    this.canvas.height = this.colorBuffer.height;

    // Get an ImageData of the right size
    var palette = ctx.getImageData(0, 0, this.colorBuffer.width, this.colorBuffer.height); // x, y, w, h
    // Wrap the pre-allocated pixel array as a Uint8ClampedArray and copy it in one call
    palette.data.set(new Uint8ClampedArray(img)); // assuming values 0..255, RGBA, pre-mult.
    // Put the data back onto the canvas
    ctx.putImageData(palette, 0, 0);

    // draw onto a second canvas (e.g. for rotating the final image)
    var ctxFinal = this.canvasFinal.getContext("2d");

    this.canvasFinal.width = this.colorBuffer.width;
    this.canvasFinal.height = this.colorBuffer.height;

    ctxFinal.drawImage(this.canvas, 0, 0);

    var image = this.canvasFinal.toDataURL('image/png');

    if (typeof callback === "function") {
        callback(image);
    }

    if( downloadImage === true ){
        window.location.href = image.replace('image/png', 'image/octet-stream');
    }
};


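One caveat worth knowing about grabs done this way: gl.readPixels returns rows bottom-up, while canvas ImageData is top-down, so the captured image can come out vertically flipped. A plain-JS row flip (a sketch, assuming RGBA at 4 bytes per pixel) can be applied to the pixel array before copying it into the ImageData:

```javascript
// Reverse the row order of an RGBA pixel buffer (WebGL reads bottom-up,
// a 2D canvas expects top-down). Returns a new clamped array of the same size.
function flipRows(pixels, width, height) {
    var bytesPerRow = width * 4;
    var flipped = new Uint8ClampedArray(pixels.length);
    for (var y = 0; y < height; y++) {
        var src = y * bytesPerRow;
        var dst = (height - 1 - y) * bytesPerRow;
        flipped.set(pixels.subarray(src, src + bytesPerRow), dst);
    }
    return flipped;
}

// e.g. palette.data.set(flipRows(this.pixels, w, h)) instead of a direct copy
```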
It won’t handle resizing the window, unfortunately. So on mobile, if you open the app in portrait, then switch to landscape and hit “save”, something won’t be right, as the buffers and render target are created at the initial resolution and not updated afterwards.

Thank you for sharing the solution! It probably deserves to be the basis for a tutorial project.


Yes, right, it won’t handle resizing! It was developed for a portrait-only application. But anyhow, it can serve as a base.


Sorry for reviving such an old post, but this is almost exactly what I need and I can’t get it to work: it always returns a gray image.
I suspect it has to do with the new layers system in the engine, but although I’ve found this recent tutorial: https://playcanvas.com/project/560797 it’s not the same as this solution (I need the base64 representation of an off-screen capture from a camera, just one frame).

I have added the shareable tag to the geometry, but being such a noob with this engine, perhaps there is something else I am missing.

Here is a sample project: https://playcanvas.com/editor/scene/679475
When you launch it, you see some geometry and a Take Screenshot button that opens the resulting image, but it is always just some gray, not what the camera shows.

Any help would be much appreciated.

I’m trying to do this too. I got as far as proving that my camera is rendering what I expect with the layer system, but I can’t get the render texture data to an external canvas: https://playcanvas.com/project/605131/overview/capturing-screenshot-from-camera

@Leonidas I don’t suppose you have a working project with this code?

Never mind, I finally got it done after realising that _glTextureId was undefined: https://playcanvas.com/project/605131/overview/capturing-screenshot-from-camera

Edit: Thanks Leonidas :slight_smile:


Is there a way to save the image with anti-aliasing? My result is not exactly the same as what is shown in the canvas.