I wonder if you can manually patch the other cameras with the XrManager for it to work with more than one camera? It's not something I've tried before, nor do I know what side effects it could have.
@max Do you think this would be possible? Maybe have an array of cameras instead of a single one in XrManager, so that start() on the XrManager takes an array of cameras?
XR is designed to render the same scene into multiple viewports of the same render target, which is then interpreted by XR-compatible devices as a 3D view. The internal rendering loop is optimized for this case, and only one list of objects is maintained, culled, and rendered. I'm not sure what the reason to specify custom cameras with different layer arrangements would be?
If you need more flexible rendering, you should probably not use XR for it, but simply create multiple cameras that render to different viewports of the same or different render targets. This gives you the ultimate flexibility.
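For reference, a minimal sketch of that non-XR approach: two ordinary cameras, each rendering to one half of the same target via the camera's `rect` property. This assumes a standard PlayCanvas script context where `app` and `pc` already exist; the entity names are just illustrative.

```javascript
// Left camera: renders into the left half of the render target.
const left = new pc.Entity('LeftCamera');
left.addComponent('camera', {
    rect: new pc.Vec4(0, 0, 0.5, 1), // x, y, width, height in normalized coords
    priority: 0
});
app.root.addChild(left);

// Right camera: renders into the right half, after the left camera.
const right = new pc.Entity('RightCamera');
right.addComponent('camera', {
    rect: new pc.Vec4(0.5, 0, 0.5, 1),
    priority: 1
});
app.root.addChild(right);
```

Each camera can point at a different render target (`camera.renderTarget`) or use different layer lists if needed.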
@yaustar @mvaligursky Thank you for your input. My use case is a space simulator, rendering a scene with a very large z range (from a few centimeters up to planetary scales, and indeed up to the stars at quasi-infinite distance). To prevent potential depth buffer precision issues, I was planning to slice the z range into 3 parts: the starry background (a unit sphere with a non-moving camera at the center), the planet, and the immediate surroundings of the viewer. In other words, a multipass depth rendering technique. Since the z range is part of the camera's projection matrix, I thought I would need more than one camera to achieve this. But maybe this slicing is unnecessary, premature optimization.
In any case I have to think more about this. Maybe it is possible to use a single camera for all of this. I have to read more through the API code.
I would start with 3 layers and assign each mesh to exactly one of them: Far, Middle, Near, based on the z-range slicing you mentioned. Then have a single camera that renders all 3 layers.
Then later, when you hit an issue with z-precision, you can create additional cameras, each rendering a single layer, listed in render order (specified by the Priority property on the camera):
0: FarCamera, renders Far layer, clears both color and depth
1: MidCamera, renders Mid layer, clears depth only
2: NearCamera, renders Near layer, clears depth only
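A rough sketch of that 3-camera setup, assuming a PlayCanvas script context where `app` and `pc` exist; the layer names match the list above, and the near/far clip distances are illustrative assumptions you would tune to your slicing:

```javascript
// Create one layer per depth slice and register them with the scene.
const farLayer  = new pc.Layer({ name: 'Far' });
const midLayer  = new pc.Layer({ name: 'Mid' });
const nearLayer = new pc.Layer({ name: 'Near' });
app.scene.layers.push(farLayer);
app.scene.layers.push(midLayer);
app.scene.layers.push(nearLayer);

// Helper: one camera per slice, ordered by priority, each clearing depth
// so its slice gets the full precision of its own near/far range.
function makeSliceCamera(name, layer, priority, clearColor, near, far) {
    const e = new pc.Entity(name);
    e.addComponent('camera', {
        layers: [layer.id],
        priority: priority,
        clearColorBuffer: clearColor, // only the first camera clears color
        clearDepthBuffer: true,       // every slice starts with fresh depth
        nearClip: near,
        farClip: far
    });
    app.root.addChild(e);
    return e;
}

makeSliceCamera('FarCamera',  farLayer,  0, true,  1e6,  1e9); // stars
makeSliceCamera('MidCamera',  midLayer,  1, false, 100,  1e6); // planet
makeSliceCamera('NearCamera', nearLayer, 2, false, 0.01, 100); // surroundings
```

Meshes are then assigned to exactly one slice via their render component, e.g. `planetEntity.render.layers = [midLayer.id]`. Keeping all three cameras parented to one entity keeps their transforms in sync.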
If this is actually something you need to run as an XR experience, you should probably use just a single camera with the 3 layers mentioned, instead of 3 cameras, and change the settings of the Mid and Near layers to clear depth instead.
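A brief sketch of that single-camera variant, assuming an engine version where `pc.Layer` accepts a `clearDepthBuffer` option (worth verifying against your engine version); names are illustrative:

```javascript
// One layer per depth slice; Mid and Near clear depth before they render,
// so each slice's geometry is depth-tested only against itself.
const farLayer  = new pc.Layer({ name: 'Far' });
const midLayer  = new pc.Layer({ name: 'Mid',  clearDepthBuffer: true });
const nearLayer = new pc.Layer({ name: 'Near', clearDepthBuffer: true });
app.scene.layers.push(farLayer);
app.scene.layers.push(midLayer);
app.scene.layers.push(nearLayer);

// Single (XR-compatible) camera renders all three layers in order.
const cam = new pc.Entity('Camera');
cam.addComponent('camera', {
    layers: [farLayer.id, midLayer.id, nearLayer.id]
});
app.root.addChild(cam);
```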
This option already gives a lot of flexibility (even though it doesn't seem to allow minimizing the frustum near/far range for greater depth precision). I'm actually surprised how flexible PlayCanvas is. Big thumbs up and thanks a lot for your valuable input. I have something to go on! Cheers!