What do you mean by transferring the .container div to the server? I am not sure how you can transfer a browser-rendered element to the server.
To save the image data to a server you will have to communicate somehow from your JS to the server; usually that is done using AJAX. This isn't really related to PlayCanvas, and you will have to prepare your PHP server to receive the call as well.
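For example, an AJAX call could look roughly like this (just a sketch; the endpoint and field name are placeholders, and your PHP would read the value from $_POST):

// Minimal AJAX sketch: POST a base-64 data URL to a PHP endpoint.
// 'save.php' and 'img_val' are placeholder names here.
function uploadImage(dataUrl) {
    var xhr = new XMLHttpRequest();
    xhr.open('POST', 'save.php');
    xhr.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
    xhr.onload = function () {
        console.log('server answered with status', xhr.status);
    };
    xhr.send('img_val=' + encodeURIComponent(dataUrl));
}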
Ok, I will have to write the code then (I have an example that is based purely on an applet-free HTML form structure). In that example, an element referenced by id (an image placed in a div) is submitted flawlessly to a save.php receiver page. Our PlayCanvas-classic .container within the exported applet will not submit the image data in the same way (at least not in my first 3-5 attempts). I will elaborate with the written code in a few hours.
Minor tech disclaimer, PHP ahead: yes, as you mentioned @Leonidas, either AJAX or, in this case, PHP will have to be used.
This is/was actually where I started: https://github.com/niklasvh/html2canvas
and within it one will find the two pages in the form structure I referred to earlier/above. Here is a shortened version of the original GitHub example - a div referenced by ‘target’:
<div id="target">
    <table cellpadding="10">
        <tr>
            <td colspan="2">
                This is a sample implementation
            </td>
        </tr>
    </table>
</div>
Below that, an encapsulated JS function captures the ‘target’ div (the area to be screenshotted, so to say):
<script type="text/javascript">
    function capture() {
        $('#target').html2canvas({
            onrendered: function (canvas) {
                // Set hidden field's value to image data (base-64 string)
                $('#img_val').val(canvas.toDataURL("image/png"));
                // Submit the form manually
                document.getElementById("myForm").submit();
            }
        });
    }
</script>
Going back to the top, the form structure (still in the first index.php) uses this to send/submit the image data.
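Roughly, it could look like this (a sketch, assuming the #img_val hidden field and the myForm id used by the script above, with save.php as the action):

<form id="myForm" method="post" action="save.php">
    <!-- hidden field that capture() fills with the base-64 image data -->
    <input type="hidden" id="img_val" name="img_val" value="" />
    <input type="button" value="Take Screenshot" onclick="capture();" />
</form>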
Finally, save.php, on the receiving end, processes the data from the POST:
<?php
//Get the base-64 string from data
$filteredData=substr($_POST['img_val'], strpos($_POST['img_val'], ",")+1);
//Decode the string
$unencodedData=base64_decode($filteredData);
//Save the image
file_put_contents('img.png', $unencodedData);
?>
So this is actually where my real trouble starts:
With the premise of altering the <div id="target"> to become our classic full in-applet .container div (?), I wish to:
take a screenshot of everything within the PlayCanvas rect (in this case the background, the robot and the black damage-UV-map plane),
and finally use the PHP-saved image to crop out the black plane (see the crop sketch below).
That way ‘the damage’ can be saved and fetched afterwards / in a different session etc. (I am not pursuing a game, but this walkthrough could actually be worthwhile for many game devs in here regardless.)
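A minimal sketch of the crop step I have in mind, done client-side with a 2D canvas after loading the saved PNG back in; the crop rectangle coordinates are hypothetical placeholders and would have to match wherever the black plane ends up in the captured screenshot:

// Crop a rectangle out of a saved image using a 2D canvas.
// The source-rectangle coordinates (sx, sy, sw, sh) are hypothetical placeholders.
function cropSavedImage(url, sx, sy, sw, sh, onDone) {
    var img = new Image();
    img.onload = function () {
        var canvas = document.createElement('canvas');
        canvas.width = sw;
        canvas.height = sh;
        var ctx = canvas.getContext('2d');
        // copy only the source rectangle of the image into the canvas
        ctx.drawImage(img, sx, sy, sw, sh, 0, 0, sw, sh);
        onDone(canvas.toDataURL('image/png'));
    };
    img.src = url;
}

// e.g. cropSavedImage('img.png', 100, 100, 256, 256, function (cropped) { /* ... */ });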
Actual problem
A) Converting ‘target+img_val’ to ‘container+container’ like:
$('#target').html2canvas({
    onrendered: function (canvas) {
        // Set hidden field's value to image data (base-64 string)
        $('#img_val').val(canvas.toDataURL("image/png"));
        // Submit the form manually
        document.getElementById("myForm").submit();
    }
});
into:
Mouse.prototype.capture = function () {
    $('.container').html2canvas({
        onrendered: function (canvas) {
            // Set hidden field's value to image data (base-64 string)
            $('.container').val(canvas.toDataURL("image/png"));
            // Submit the form manually
            document.getElementById("myForm").submit();
        }
    });
};
does not seem to help me at all. Neither ‘body’ nor ‘.container’ within the CSS seems to represent the actual 3D content of the applet anyway (?)
What can I do to ‘transfer’ the image data from the rendering of the PlayCanvas content behind the HTML layer (now body, container, buttons etc.) to a virtual placeholder that can then be sent/submitted to the save.php?
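A side thought (just an assumption on my part): since html2canvas apparently cannot see the WebGL rendering, maybe toDataURL can be called directly on the canvas element that PlayCanvas renders into. As far as I understand, this only gives a non-black image if the WebGL context keeps its drawing buffer (preserveDrawingBuffer), or if the read happens right after a frame is rendered:

// Sketch: read the PlayCanvas WebGL canvas directly, bypassing html2canvas.
// Only works reliably if the drawing buffer still holds the last frame
// (preserveDrawingBuffer enabled, or called right after rendering).
Mouse.prototype.captureCanvas = function () {
    var canvas = this.app.graphicsDevice.canvas;  // the <canvas> element PlayCanvas renders into
    var dataUrl = canvas.toDataURL('image/png');  // base-64 PNG of the current frame

    // fill the hidden form field and submit, like in the html2canvas example
    $('#img_val').val(dataUrl);
    document.getElementById('myForm').submit();
};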
Ok, nice … so I am ‘pursuing the first approach’, handling a renderTexture only.
I can tell the ‘Capturing a screenshot’ example has been altered/enhanced into ‘Capturing screenshot by camera’ - I will take a look at it.
If you press the Take screenshot button, an error message prompts, as the ‘layerName’ is missing.
The obvious debug is to fill that in with our texture in focus, ‘RenderTexture’, and that works quite well.
But if you try that and re-launch, you will find that everything turns black.
You might have a hunch that my personal learning curve regarding handling textures, colorBuffer, canvas.toDataURL, renderTarget/base64, PHP/base64_decode etc. is still ‘in the making’, so to say [no problem with that admission].
So bear with me here if the following sounds obvious to you:
95% of the problem seems fixed, and now I just need to keep the WorldCamera functional and not just get the black ‘RenderTexture’ at launch.
(In other words, the last 5% is quite simply: how to handle the RenderTexture within the Robot Damage project base without the ‘turning black’ issue.)
PS: Yes I commented out the “material.emissiveMap = layer.renderTarget.colorBuffer;” as it can work without it.
Well, it is only ‘sort of missing’ (at the script editor placeholder):
and if you look at the image above, you will see that the renderLayerToTexture script is using it. But (as written) the cameraToTexture script causes a camera conflict when one tries to set the layerName:
Anyway, I will await @Leonidas (if he has an explanation/a fix), but otherwise I will try your ‘multiple textures’ approach in the meantime.
PS: @yaustar - nice that I can check out that resolution example as well (namely, I am not sure the link in the announcement post you made about it worked).
Sorry, I can't study your full project, but here is what I would do if I were doing this:
Get a reference to the layer holding the character damage texture (ApplyTexture.js):
// assuming the layer in question is the ‘RenderTexture’ layer mentioned above
var layer = this.app.scene.layers.getLayerByName('RenderTexture');
var colorBuffer = layer.renderTarget.colorBuffer;
Convert it to base64 using a canvas:
// create a 2D canvas sized to the render target, to hold the pixels
this.canvas = window.document.createElement('canvas');
this.canvas.width = colorBuffer.width;
this.canvas.height = colorBuffer.height;
this.context = this.canvas.getContext('2d');

// read the render target's pixels back from the GPU
var gl = this.app.graphicsDevice.gl;
var fb = gl.createFramebuffer();
var pixels = new Uint8Array(colorBuffer.width * colorBuffer.height * 4);
gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
// _glTexture is an internal engine property of the pc.Texture
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, colorBuffer._glTexture, 0);
gl.readPixels(0, 0, colorBuffer.width, colorBuffer.height, gl.RGBA, gl.UNSIGNED_BYTE, pixels);
// note: readPixels returns rows bottom-up, so the result may be vertically flipped
gl.bindFramebuffer(gl.FRAMEBUFFER, null); // restore the default framebuffer

// create a new ImageData to contain our pixels
var imgData = this.context.createImageData(colorBuffer.width, colorBuffer.height); // width x height
// wrap the raw pixels as a Uint8ClampedArray (values 0..255, RGBA)
imgData.data.set(new Uint8ClampedArray(pixels));
// write the pixel data to the canvas
this.context.putImageData(imgData, 0, 0);
var image = this.canvas.toDataURL('image/png');
// you can now post the image, base64 encoded, to your server
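As a possible follow-up (just a sketch), the resulting base64 string could then be posted to the save.php shown earlier in the thread, keeping the img_val field name so the PHP side works unchanged:

// Sketch: POST the base-64 data URL to save.php, reusing the 'img_val' field
// that the earlier save.php example reads from $_POST.
var form = new FormData();
form.append('img_val', image);

fetch('save.php', { method: 'POST', body: form })
    .then(function (response) { console.log('upload status:', response.status); })
    .catch(function (err) { console.error('upload failed:', err); });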
Yes, I was around that code cluster/approach as well. I ended up importing assets from the Robot Damage example into your Capture Screenshot from Camera example, and it worked after a lot of small adjustments.
Thx again to both of you - would not have made it without the help.
My projects are usually much, much larger than the chunks I write in here, and in 18 out of 20 cases I find the solutions myself - naturally my issues and forum questions start at places without any prior examples of my ideas/challenges.
PS: @yaustar, the resolution example is actually extremely relevant - media conglomerates like YouTube, HBO and Netflix use it often to show pixelated streams instead of nothing.