I’ve been tasked with looking into whether it’s possible to play back volumetric video via PlayCanvas in AR.
I have a basic example of the 4DViews web player library playing a volumetric video via Three.js, and I’m trying to figure out what the process would be to convert that to PlayCanvas. Does anyone have any tips on this?
The Three.js-based project is here:
I have approached 4DViews to ask for guidance on using their library with PlayCanvas, as they don’t have much in the way of documentation, just a basic tutorial video.
The actual video footage is either in the (proprietary?) .4DS format, which the web player plays, or in Alembic (.abc). Audio is carried separately. Samples here: 4Dviews - Volumetric video capture technology
Thanks @yaustar. I received this from 4DViews support:
in the js files, we did our best to separate the three.js code from the decoding engine part.
I don’t know about PlayCanvas, but if it’s quite similar to three.js it should be possible to replace the model4D_three.js with a PlayCanvas implementation.
There are also some small parts in the importer.js files, around audio management, that we couldn’t easily separate from three.js.
I know that some people successfully implemented a babylon.js version, so I’m confident it’s possible with Playcanvas too.
Would you be able to give a ballpark breakdown of the process of converting the above Hcap library to PlayCanvas? What would be the main steps and potential issues? I’m trying to gauge how big a job this would be and what sort of expertise we would need to bring in externally.
I have a fairly good grasp of JS, but I have no idea what’s involved in porting an existing library to another platform, especially when WebGL is involved.
It’s those areas that you need to port across to PlayCanvas (where possible; there may not be a one-to-one mapping of features), along with the areas of the code where that data and those variables are used.
Offhand, most of it is about creating a mesh and a material.
I can’t really tell whether it’s editing the current geometry or reading in the video data.
I can imagine this being a reasonably sizable job for a non-graphics engineer as they would need to understand how three.js works and be able to map it to the PlayCanvas feature set.
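To make the "creating a mesh and a material" part concrete, here is a minimal sketch of what the PlayCanvas side of such a port might look like. This is an assumption-heavy illustration: names like `frame.positions`, `frame.uvs`, and `frame.indices` are hypothetical stand-ins for whatever the 4DViews decoder actually outputs, and the real integration would need to match the decoder's API.

```javascript
// Pure helper (hypothetical decoder output assumed): normalise a decoded
// frame into the typed arrays PlayCanvas mesh setters expect.
function toMeshData(frame) {
  return {
    positions: Float32Array.from(frame.positions), // x,y,z per vertex
    uvs: Float32Array.from(frame.uvs),             // u,v per vertex
    indices: Uint32Array.from(frame.indices)       // triangle list
  };
}

// PlayCanvas-specific part; only runs when the engine (`pc`) is loaded.
// `mesh` would be a pc.Mesh created once and updated per frame.
function applyFrame(mesh, frame) {
  const data = toMeshData(frame);
  if (typeof pc === 'undefined' || !mesh) return data; // engine not present
  mesh.setPositions(data.positions);
  mesh.setUvs(0, data.uvs);
  mesh.setIndices(data.indices);
  mesh.update(pc.PRIMITIVE_TRIANGLES); // re-upload buffers, refresh bounds
  return data;
}
```

The material side would be a one-off setup (e.g. a `pc.StandardMaterial` with the video texture as its diffuse map); the per-frame work is mostly the vertex data updates above.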
@pjburnhill we’ve actually done this in both streaming and preloaded form and it works really well. You need to time-sync the mesh data and the texture data, but that’s not a huge task.
I believe it was for a 4DViews file, but as long as it’s an open format it should be doable. I can’t share the link publicly, but if you drop me a DM I can send you something.
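For anyone wondering what the time sync amounts to, here is a rough sketch of the idea. Assumptions (not from the 4DViews player itself): frames are stored at a fixed rate, and the audio element's `currentTime` acts as the master clock from which both the mesh frame and texture frame index are derived.

```javascript
// Map the audio clock to a frame index at a fixed frame rate,
// clamped so a slightly-ahead clock never indexes past the last frame.
function frameForTime(audioTimeSec, fps, frameCount) {
  const i = Math.floor(audioTimeSec * fps);
  return Math.min(Math.max(i, 0), frameCount - 1);
}

// Per-tick update: mesh and texture come from the same timeline,
// so one index drives both. `showFrame` is a hypothetical callback
// that would upload mesh i and texture i to the GPU.
function syncTick(audio, fps, frameCount, showFrame) {
  const i = frameForTime(audio.currentTime, fps, frameCount);
  showFrame(i);
  return i;
}
```

Called from the render loop (e.g. PlayCanvas's per-frame `update`), this keeps geometry and texture locked to the audio rather than to wall-clock time, so dropped frames resync automatically.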
Just revisiting this: would it be much more straightforward to implement the non-three.js version (the one that interacts directly with WebGL) in PlayCanvas, as seen below?