Mesh and texture interop with custom WebGL library

We have an existing WebGL-based library for playback of volumetric video content in a web environment that we’re interested in integrating with PlayCanvas, so that users can include this type of content in their PlayCanvas projects.

The way this integration would work is essentially: on each update/tick of the app, our library produces an updated mesh and texture pair for the current frame of playback, which we then need to use to populate a PlayCanvas mesh in the scene so that the animated content is displayed.

In other integrations we’ve done, such as with Three.js, we get access to the underlying WebGL objects (texture handle, vertex and index buffer handles) and our library populates them directly with WebGL APIs. I’ve managed to put together a proof-of-concept PlayCanvas example of this workflow as well, but it relies on digging the private WebGL implementation objects out of PlayCanvas objects, e.g. mesh.vertexBuffer.impl.bufferId, material.diffuseMap.impl._glTexture, etc.
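For reference, a minimal sketch of what that proof-of-concept access looks like. The .impl properties are private internals and may differ between engine versions; frameVertexData, framePixels, texWidth and texHeight are placeholders for data coming out of our decoder:

```javascript
// Proof-of-concept only: reaches into private PlayCanvas internals,
// which can change between engine releases.
const gl = app.graphicsDevice.gl; // underlying WebGL context

// raw WebGLBuffer behind a pc.Mesh's vertex buffer
const glVertexBuffer = mesh.vertexBuffer.impl.bufferId;
gl.bindBuffer(gl.ARRAY_BUFFER, glVertexBuffer);
gl.bufferSubData(gl.ARRAY_BUFFER, 0, frameVertexData); // ArrayBuffer for this frame

// raw WebGLTexture behind a pc.Texture
const glTexture = material.diffuseMap.impl._glTexture;
gl.bindTexture(gl.TEXTURE_2D, glTexture);
gl.texSubImage2D(gl.TEXTURE_2D, 0, 0, 0, texWidth, texHeight,
    gl.RGBA, gl.UNSIGNED_BYTE, framePixels); // Uint8Array for this frame
```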

So it seems like this approach could work, but it raises a few concerns:

  • Is code that interacts with the private implementation of PC engine types “allowed” on the platform? Would a user’s project be rejected or something if they included code like this?
  • If we had an implementation that was known to work with a particular version of PC, how likely is it that the underlying implementation would change and break our integration? Can a user target a particular version of PlayCanvas for the lifetime of their project, or is it subject to change at any time?
  • Would it make sense to propose some “official” features to support this type of workflow, so it doesn’t have to depend on hacking into the private PC engine implementation? The addition of GLBufferAttribute to Three.js (GLBufferAttribute – three.js docs (threejs.org)) was something that allowed our integration to take advantage of actually supported features and rely less on these kinds of hacks :smiley:

Thanks for posting and yes, we would love to help as much as we can :slight_smile:

It wouldn’t be rejected, but the private implementation can change at any time, unfortunately.

For editor projects, developers can technically target any engine build for publishing using the REST API, and target a specific version in the Launch tab. However, this is generally reserved for testing/emergencies.

Realistically, developers would be using the ‘stable’/current version of the engine, the previous version, or the preview build.

This would be a question for @mvaligursky or @slimbuck I think :slight_smile:

One way to do this is, as you said, to dig out our internal members and populate the data. These are subject to change, and as we’re working on WebGPU we’ve already changed some of them and will change more, so I most likely would not recommend this path.

The other path would be to do your WebGL calls on your side: allocate GL textures / vertex buffers and so on, and inside some callback, perhaps at the end of camera rendering, take over the GL context and run WebGL commands directly. After that you’d need to call a few functions on our device class to clean up internal state, to make sure that follow-up rendering sets everything it needs without depending on the existing state. This could be doable reasonably safely, I think.
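Roughly, from the user side it could look something like this. This is just a sketch assuming an app-level ‘postrender’ event can be used for it; volumetricPlayer.updateGpuResources is a hypothetical call into your library, and the device clean-up functions mentioned above aren’t named here:

```javascript
// Sketch only: take over the shared GL context after PlayCanvas has rendered.
app.on('postrender', () => {
    const gl = app.graphicsDevice.gl;

    // run the library's WebGL commands directly against the shared context
    volumetricPlayer.updateGpuResources(gl); // hypothetical call into the library

    // afterwards the engine's cached GL state would need to be re-synced via
    // the device clean-up functions mentioned above (not named in this thread)
});
```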

The best option would be to copy data into our containers using the public API though. You specifically mentioned the mesh data, and here we have two APIs. One is where you give us arrays of positions, normals, UVs and similar attributes, and we build a mesh. The other, which is likely more useful for you, is to create a VertexFormat, which is similar to the GLBufferAttribute you mentioned, and then create a VertexBuffer with this format and the raw data of the VB. See VertexFormat | PlayCanvas API Reference. When you have that constructed, you can create a vertex buffer with it and pass in data using .setData on it: VertexBuffer | PlayCanvas API Reference
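A minimal sketch of that path (the attribute layout and maxVertexCount here are just assumptions about your data, and the VertexBuffer constructor arguments may differ between engine versions):

```javascript
const device = app.graphicsDevice;

// describe the vertex layout once, similar to GLBufferAttribute in Three.js
const format = new pc.VertexFormat(device, [
    { semantic: pc.SEMANTIC_POSITION,  components: 3, type: pc.TYPE_FLOAT32 },
    { semantic: pc.SEMANTIC_NORMAL,    components: 3, type: pc.TYPE_FLOAT32 },
    { semantic: pc.SEMANTIC_TEXCOORD0, components: 2, type: pc.TYPE_FLOAT32 }
]);

// allocate a dynamic buffer large enough for the biggest frame
const vb = new pc.VertexBuffer(device, format, maxVertexCount, pc.BUFFER_DYNAMIC);

// each tick, once the decoder has produced new CPU-side vertex data:
vb.setData(frameVertexData); // frameVertexData: ArrayBuffer matching `format`
```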


Hi @eodabashian,

So happy to hear you’re working on this. I was planning to have a go integrating volumetric video myself when I got the time. Are you able to share your progress at all? Would love to take a look.

Thanks!

Hi guys, thanks for all the replies!

@mvaligursky I agree that using public APIs would be ideal if possible. The issue, though, is that our library already has the data on the GPU in WebGL texture and buffer objects, so to use VertexBuffer.setData (and similarly Texture.lock/unlock, I guess) we’d have to pull the data back from WebGL and re-upload it via these APIs, which probably isn’t going to be very performant at 30 FPS.
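To make that concrete, the per-frame round trip would look roughly like this (sketch only, assuming a WebGL2 context; glVertexBuffer, vertexByteLength and vb are placeholders for our existing buffer, its size, and the PlayCanvas VertexBuffer):

```javascript
const gl = app.graphicsDevice.gl; // WebGL2 context
const staging = new Uint8Array(vertexByteLength);

// GPU -> CPU: read back the data our library already uploaded (stalls the pipeline)
gl.bindBuffer(gl.ARRAY_BUFFER, glVertexBuffer);
gl.getBufferSubData(gl.ARRAY_BUFFER, 0, staging);

// CPU -> GPU again, this time through the PlayCanvas public API
vb.setData(staging.buffer);
```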

The end-of-camera-rendering callback idea sounds interesting, if you think we could do it in a reasonably stable/future-proof way. That callback isn’t something that exists currently though, is that correct?

@slimbuck This work is for playing content produced by Microsoft Mixed Reality Capture Studios. We generally only give access to the code and plugins via our private GitHub repo. If you guys are interested in getting on-boarded so we can share access, you can email my username @microsoft.com. We can also continue the discussion from this thread about the best way to get this PlayCanvas plugin/integration working :slightly_smiling_face:

Thanks!
Evan


Created an email for this :slight_smile: