[SOLVED] Convert RenderTexture's colorBuffer to base64?

Hi there,

with normal pc.Textures it's quite simple to convert to base64 - I would do it like this:

    public getCopyOfTexture ( myTexture : pc.Texture ) : string {
        let canvas = document.createElement( 'canvas' );
        let ctx = canvas.getContext( '2d' );
        canvas.width = 512;
        canvas.height = 512;
        // getSource() returns the already-loaded source image, so it can be
        // drawn directly - creating a new Image() and drawing it before its
        // 'load' event fires would produce an empty canvas
        ctx.drawImage( myTexture.getSource(), 0, 0 );
        return canvas.toDataURL( 'image/png' );
    }

But this isn't possible with colorBuffers created for RenderTarget purposes, as they have an empty source.

So, is there any alternative method I can use to render my RenderTexture's colorBuffer on a canvas? Or, more straightforward: is there any method to convert a pc.Texture to base64 that doesn't involve its getSource method?
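For reference, one way to do this without getSource is to read the pixels straight off the render target's framebuffer with gl.readPixels and put them into a 2D canvas. This is only a sketch under some assumptions: the function names (`flipRowsRGBA`, `renderTargetToDataURL`) are my own, it assumes a WebGL device and an RGBA8 colorBuffer, and `renderTarget._glFrameBuffer` is an internal engine property that may be named differently between engine versions.

```javascript
// Pure helper: WebGL's readPixels returns rows bottom-up, so flip them
// to get a top-down image. Returns a new Uint8ClampedArray.
function flipRowsRGBA(pixels, width, height) {
    const rowSize = width * 4;
    const flipped = new Uint8ClampedArray(pixels.length);
    for (let y = 0; y < height; y++) {
        const src = y * rowSize;
        const dst = (height - 1 - y) * rowSize;
        flipped.set(pixels.subarray(src, src + rowSize), dst);
    }
    return flipped;
}

// Browser-only glue (hypothetical; relies on engine internals).
// Reads the render target's pixels and draws them into a 2D canvas.
function renderTargetToDataURL(device, renderTarget) {
    const gl = device.gl;
    const w = renderTarget.colorBuffer.width;
    const h = renderTarget.colorBuffer.height;
    const pixels = new Uint8Array(w * h * 4);

    // _glFrameBuffer is internal - check your engine version's property name
    gl.bindFramebuffer(gl.FRAMEBUFFER, renderTarget._glFrameBuffer);
    gl.readPixels(0, 0, w, h, gl.RGBA, gl.UNSIGNED_BYTE, pixels);
    gl.bindFramebuffer(gl.FRAMEBUFFER, null);

    const canvas = document.createElement('canvas');
    canvas.width = w;
    canvas.height = h;
    const ctx = canvas.getContext('2d');
    const flipped = flipRowsRGBA(new Uint8ClampedArray(pixels.buffer), w, h);
    ctx.putImageData(new ImageData(flipped, w, h), 0, 0);
    return canvas.toDataURL('image/png');
}
```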

I’ve done a version of saving a renderTexture to a PNG which involved copying it to a canvas here: ☑ Save specific rendered entities to image

Relevant code here: https://playcanvas.com/editor/code/605131?tabs=17966832


That's extremely useful, man. Thanks!

While trying to use your method, I cannot get it right - maybe because of some pixel format difference.

I’m using PIXELFORMAT_R8_G8_B8_A8 for the RenderTexture, and here’s my version of your code:

With that, I always get two images rendered on top of one another:
The one underneath is the same render, but upside down.

The most mysterious thing is that changing the context's vertical translation and vertical scale values doesn't move both rendered layers - it scales and moves only the one on top.

Without seeing the project setup, it’s going to be hard to debug. Can you link to the project please?

It doesn’t move any layers. It changes the context, not the layer.

Thinking about it, it’s VERY possible that the context needs to be cleared before each render frame as you have an alpha channel. So you are rendering one frame, transforming the context, rendering the next frame, transforming the context, etc.

Try clearing the context before applying the render texture to it.
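To illustrate the suggestion: the clear is just a `clearRect` over the whole canvas before each draw, so alpha pixels from the previous frame can't show through. A minimal sketch (the function name and arguments are my own, assuming a 2D context):

```javascript
// Sketch: clear the 2D context before drawing each frame, so that
// transparent regions of the new frame don't reveal the old one.
function drawFrame(ctx, source) {
    ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);
    ctx.drawImage(source, 0, 0);
}
```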

Cannot make the project public, but maybe more info will be enough?

So, the RenderTexture is created in another place in the game…

            rtw.colorBuffer = new pc.Texture( pc.app.graphicsDevice, {
                width: 256,
                height: 256,
                format: pc.PIXELFORMAT_R8_G8_B8_A8,
                autoMipmap: true
            } );
            rtw.colorBuffer.minFilter = pc.FILTER_LINEAR;
            rtw.colorBuffer.magFilter = pc.FILTER_LINEAR;

later, the target is created:

        // colorBuffer is passed as the second constructor argument;
        // it doesn't belong in the options object as well
        this.layer.renderTarget = new pc.RenderTarget( pc.app.graphicsDevice, RenderTemporaryWorld.colorBuffer, {
            depth: true
        } );

…and I render it to the UI, and it looks like this:
Then, I have a method that creates a one-time screenshot (I pasted the content above). For now the canvas, the context and the framebuffer are created and live only within this function.

That said, as I don't reuse the context and I do the rendering only once, there can't be any leftover pixels from another render.

Clearing the context just before putting the imageData didn't change anything; the resulting image is still double-rendered.

Can you replicate this in a new public project?

Here it is, based on the example you provided: https://playcanvas.com/editor/scene/737110

(also, I made a commit so the differences can be viewed)

I'm a little stumped. I think it's to do with the way I flip the context, or maybe it's double wrapping when writing to the context.

The issue doesn’t seem to be with the renderTexture itself as that looks fine when I apply it to a material.


OK, thanks man. I got past that by manually inverting rows in the pixel array… :’).
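For anyone who lands here with the same upside-down result: the row inversion can also be done in place, without allocating a second pixel array. A sketch assuming an RGBA8 buffer (the function name is my own):

```javascript
// Flip an RGBA pixel buffer vertically, in place, by swapping rows
// from the outside in. Works on typed arrays of size width * height * 4.
function flipPixelsInPlace(pixels, width, height) {
    const rowSize = width * 4;
    const temp = new Uint8Array(rowSize);
    for (let top = 0, bottom = height - 1; top < bottom; top++, bottom--) {
        const a = top * rowSize;
        const b = bottom * rowSize;
        temp.set(pixels.subarray(a, a + rowSize)); // stash the top row
        pixels.copyWithin(a, b, b + rowSize);      // bottom row -> top
        pixels.set(temp, b);                       // stashed row -> bottom
    }
    return pixels;
}
```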
