Volumetric video playback

I’ve been tasked with looking into whether it’s possible to play back volumetric video in AR via PlayCanvas.

I have a basic example of the 4DViews web player library playing a volumetric video via Three.js, and I’m trying to figure out what the process would be to convert that to PlayCanvas. Does anyone have any tips on this?

The Three.js-based project is here:

I have approached 4DViews for guidance on using their library with PlayCanvas, as they don’t have much documentation, just a basic tutorial video.

The actual video footage is either in the (proprietary?) .4DS format, which the web player plays, or in Alembic (.abc). Audio is carried separately. Samples here: 4Dviews - Volumetric video capture technology

Any help would be appreciated.

@slimbuck or Gustav, do you have any ideas?

Having a quick look here: https://github.com/pjburnhill/web4ds-example/blob/main/js/script.js#L140

It looks like a mesh is created but I can’t really tell what format it is in.

Looks like you would have to port this file to work with PlayCanvas classes and objects: https://github.com/pjburnhill/web4ds-example/blob/5686c0b6c1c6cf5861adcbdd77b45bcc0686952b/web4dv/web4dvImporter.js

I think it creates a mesh and animates its vertices over time.

Our example here could help: https://playcanvas.github.io/#/graphics/mesh-generation
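To make the mesh-generation side concrete, here is a minimal sketch of what the PlayCanvas end of such a port could look like. The geometry values and the `createMeshInstance` helper are hypothetical stand-ins for whatever the 4DViews decoder emits per frame, and the snippet assumes the editor-style global `pc`:

```javascript
// Pure helper: build flat position/UV/index arrays for a single quad.
// In a real port, the 4DViews decoder would supply these arrays per frame.
function buildQuadGeometry() {
    return {
        positions: [0, 0, 0,  1, 0, 0,  1, 1, 0,  0, 1, 0],
        uvs:       [0, 0,     1, 0,     1, 1,     0, 1],
        indices:   [0, 1, 2,  0, 2, 3]
    };
}

// Sketch only: turn decoded geometry into a PlayCanvas mesh instance.
// Assumes the global `pc` that PlayCanvas editor scripts have available.
function createMeshInstance(device, geom) {
    const mesh = new pc.Mesh(device);
    mesh.setPositions(geom.positions);
    mesh.setUvs(0, geom.uvs);
    mesh.setIndices(geom.indices);
    mesh.update(pc.PRIMITIVE_TRIANGLES);

    const material = new pc.StandardMaterial();
    material.update();

    return new pc.MeshInstance(mesh, material);
}
```

This mirrors the approach in the mesh-generation example linked above: build plain arrays, hand them to `pc.Mesh`, and wrap the result in a `pc.MeshInstance`.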

Thanks @yaustar. I received this from 4DViews support:

in the js files, we did our best to separate the three.js code from the decoding engine part.
I don’t know about PlayCanvas, but if it’s quite similar to three.js it should be possible to replace model4D_three.js with a PlayCanvas implementation.

There are also some small parts in the importer.js files, dealing with audio management, that we couldn’t easily separate from three.js.

I know that some people successfully implemented a babylon.js version, so I’m confident it’s possible with PlayCanvas too.

This is the file he references:
https://github.com/pjburnhill/web4ds-example/blob/main/web4dv/model4D_Three.js

It seems this is potentially quite a big job to get running in PlayCanvas?

It doesn’t seem that bad to port over. Ultimately it’s creating a mesh with texture UVs.

Actually I don’t think this will be very difficult. Famous last words perhaps… If I get some free time I might take a look.

Thank you both, I will dig into what you’ve shared, even though it’ll probably be a bit over my head for now.

Hi again,

I also have a Microsoft HCap example project running in Three.js; I wonder if this would be easier to implement?

About the same, I think? You would still need to port the HoloVideoObject library, as the one in that example is purely for three.js.

Hi @yaustar

Would you be able to give a ballpark breakdown of the process of converting the above HCap library to PlayCanvas: what would be the main steps and potential issues? I’m trying to figure out how big a job this would be and what sort of expertise we would need to get it done externally.

I have a fairly good grasp of JS, but I have no idea what’s involved in converting an existing library to another platform, especially when WebGL is involved.

Thanks,
PJ

Can you link to the library please? The project you’ve linked to only has the minified version of it.

@yaustar I’ve added the source files to src directory:

These files come from MS MixedRealityCaptureStudios DevKit here:
https://github.com/MixedRealityCaptureStudios/DevKit

You can request access here if needed:
https://mrcs.microsoftcrmportals.com/en-US/signup/

I’ve requested access.

Looking at the minified version in the meantime:

You can see areas in the library where the three.js library is used:

It’s those areas that you need to port across to PlayCanvas (if possible; there may not be a one-to-one mapping of features), along with the parts of the code where that data and those variables are used.

Offhand, most of it is about creating a mesh and a material.
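Roughly speaking, the three.js pieces such a library uses map onto PlayCanvas counterparts like this (a sketch, not a guaranteed 1:1 mapping; the `createVideoTexture` helper is hypothetical and assumes the editor-style global `pc`):

```javascript
// Approximate three.js → PlayCanvas mapping for the parts these players use:
//   THREE.BufferGeometry + setAttribute  →  pc.Mesh.setPositions / setUvs / setIndices
//   THREE.MeshBasicMaterial({ map })     →  pc.StandardMaterial with a diffuseMap
//   THREE.Texture / THREE.VideoTexture   →  pc.Texture with setSource(videoElement)
//   THREE.Mesh + scene.add(...)          →  pc.MeshInstance on a pc.Entity render component

// Sketch: a video-backed texture, since the HCap/4DViews players stream
// their texture data from a video element.
function createVideoTexture(device, videoElement) {
    const texture = new pc.Texture(device, {
        format: pc.PIXELFORMAT_RGBA8,
        mipmaps: false
    });
    texture.setSource(videoElement);
    return texture;
}
```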


This buffer bit does worry me as it’s low level and not an area I’m familiar with:

I can’t really tell if it’s editing the current geometry or reading in the video data.
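For what it’s worth, per-frame vertex updates on the PlayCanvas side don’t have to touch raw WebGL buffers directly; `pc.Mesh` can re-upload its data for you. A hedged sketch (the `frame` shape is a hypothetical stand-in for whatever the decoder emits, and `pc` is the editor-style global):

```javascript
// Pure helper: sanity-check a decoded frame before uploading it.
function frameIsConsistent(frame) {
    const vertexCount = frame.positions.length / 3;
    return !frame.uvs || frame.uvs.length / 2 === vertexCount;
}

// Sketch only: apply one decoded frame's vertex data to an existing pc.Mesh.
function applyFrameToMesh(mesh, frame) {
    mesh.setPositions(frame.positions);   // typed array from the decoder
    if (frame.uvs) {
        mesh.setUvs(0, frame.uvs);        // UVs may change per keyframe
    }
    mesh.update(pc.PRIMITIVE_TRIANGLES);  // re-uploads the vertex buffer
}
```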

I can imagine this being a reasonably sizable job for a non-graphics engineer as they would need to understand how three.js works and be able to map it to the PlayCanvas feature set.


Thanks @yaustar for looking into it. It might be a bit too big of a job without guaranteed success. Might have to stick to Three.js on this.

@pjburnhill we’ve actually done this in both streaming and preloaded format and it works really well. You need to time sync the mesh data and the texture data, but it’s not a huge task to do so
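On the time-sync point: the core of it is just mapping the playback clock (typically the texture video’s `currentTime`) to a mesh frame index, clamped to the clip length. A sketch, where the fps and frame count come from whatever the clip metadata provides:

```javascript
// Map the texture/audio playback time to the mesh frame to display.
function frameIndexForTime(timeSeconds, fps, frameCount) {
    const index = Math.floor(timeSeconds * fps);
    return Math.min(Math.max(index, 0), frameCount - 1);
}

// e.g. driving per-frame mesh updates from a <video> element's clock:
// const frame = frameIndexForTime(video.currentTime, 30, totalFrames);
```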


Oh, do tell! Would you be willing to share an example? Was that the MS Hcap format you used?

I believe it was for a 4DViews file, but as long as it’s an open format it should be doable. I can’t share the link publicly, but if you drop me a DM I can send you something.

Just revisiting this: would it be more straightforward to bring the non-three.js version (which interacts directly with WebGL) into PlayCanvas, as seen below?

https://github.com/pjburnhill/hcap-threejs-example/blob/main/src/API.md

Script here:
https://github.com/pjburnhill/hcap-threejs-example/blob/main/src/umd-bundles/holo-video-object-umd.js

It’s more or less the same amount of work, I think, unless you are planning to use it as-is (which I’m not sure is possible)?