Camera render to element not maintaining image

So, I have a script that renders a camera to an element, but when it renders, the element's texture is lost.
[Screenshot 2024-03-28 5.18.57 AM]
This is what I need the camera render to look like.

Instead, it just fills a box-shaped area where the image is. For the aesthetic of my game this doesn't feel right, as it looks like a screen instead of a map. Here is the render-camera-to-element script:

var RenderCameraToElement = pc.createScript('renderCameraToElement');

RenderCameraToElement.attributes.add('elementTags', {
    type: 'string', array: true, description: 'Render to elements that have these tags'

RenderCameraToElement.attributes.add('renderResolution', {
    type: 'vec2', description: 'Resolution to render at'

RenderCameraToElement.attributes.add('renderOnce', {
    type: 'boolean', description: 'Renders the first frame only'
// initialize code called once per entity
RenderCameraToElement.prototype.initialize = function() {
    this.renderOnceFrameCount = 0;
    this.createNewRenderTexture();
    this.assignTextureToElements(this.texture);

    this.on('destroy', function () {
        // Release the GPU resources when the script is destroyed
        if (this.renderTarget) this.renderTarget.destroy();
        if (this.texture) this.texture.destroy();
    }, this);
};

RenderCameraToElement.prototype.update = function (dt) {
    // Workaround as it takes a few frames before the
    // camera has rendered the entities
    if (this.renderOnce) {
        this.renderOnceFrameCount += 1;
        if (this.renderOnceFrameCount > 4) {
            this.entity.enabled = false;
        }
    }
};
RenderCameraToElement.prototype.createNewRenderTexture = function() {
    var device =;

    // Make sure we clean up the old textures first and remove
    // any references
    if (this.texture && this.renderTarget) {
        var oldRenderTarget = this.renderTarget;
        var oldTexture = this.texture;
        this.renderTarget = null;
        this.texture = null;
        oldRenderTarget.destroy();
        oldTexture.destroy();
    }
    // Create a new texture based on the current width and height of
    // the screen reduced by the scale
    var colorBuffer = new pc.Texture(device, {
        width: this.renderResolution.x,
        height: this.renderResolution.y,
        format: pc.PIXELFORMAT_R8_G8_B8,
        autoMipmap: true
    });

    colorBuffer.minFilter = pc.FILTER_LINEAR;
    colorBuffer.magFilter = pc.FILTER_LINEAR;

    var renderTarget = new pc.RenderTarget(device, colorBuffer, {
        depth: true,
        flipY: true,
        samples: 2
    });

    // Assign the render target to the camera on this entity = renderTarget;
    this.texture = colorBuffer;
    this.renderTarget = renderTarget;
};


RenderCameraToElement.prototype.assignTextureToElements = function (texture) {
    // Assign the texture to all elements that have one of the tags
    var elementEntities;

    for (var i = 0; i < this.elementTags.length; ++i) {
        elementEntities =[i]);
        for (var j = 0; j < elementEntities.length; ++j) {
            elementEntities[j].element.texture = texture;
        }
    }
};

// swap method called for script hot-reloading
// inherit your script state here
// RenderCameraToElement.prototype.swap = function(old) { };

// to learn more about script anatomy, please read the PlayCanvas scripting docs
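On the "fills out a box" look: if `renderResolution` does not match the target element's aspect ratio, the camera output gets stretched into a square-ish block. A minimal sketch of deriving a resolution from the element's size (the helper name and the `baseHeight` parameter are my own, not part of the script above):

```javascript
// Derive a render resolution that preserves the element's aspect ratio.
// elementWidth / elementHeight would come from entity.element.width / .height
// in PlayCanvas; baseHeight is an assumed target height for the texture.
function resolutionForElement(elementWidth, elementHeight, baseHeight) {
    var aspect = elementWidth / elementHeight;
    return {
        x: Math.round(baseHeight * aspect),
        y: baseHeight
    };
}
```

The result could then be assigned to `renderResolution` before `createNewRenderTexture()` runs, so the render is not squashed when the element is wider than it is tall.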

[Screenshot 2024-03-28 5.49.41 AM]
Setting the opacity kind of works, but it clips and overlaps off the edges, still doesn't follow the texture's shape, and isn't easy to see.

As you can see in your code, you are basically overwriting the element's texture with the texture rendered from the camera.

To achieve your goal, create a parent element for your camera-rendered element and enable Mask on that parent element.
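A sketch of the hierarchy this describes, assuming the setup from the original post (entity names are placeholders):

```text
Screen (2D Screen)
└── MapGroup   – Element: Image, texture = map shape/background, Mask = ON
    └── MapRender – Element: Image, tagged so elementTags finds it;
                    receives the camera's render texture and is
                    clipped to MapGroup's image
```

With Mask enabled on the parent, child elements (and therefore the render texture) are only drawn inside the parent's image, so the render follows the map shape instead of filling a rectangle.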


I did this, but the texture just disappeared and nothing from the camera was rendered.

Could you show me a reference, video, or example of how to correctly mask the parent? In my case, the parent is a group with a background image plus the image I want the camera to render to.