I asked ChatGPT and it says this:
To set a PlayCanvas camera to a world-to-camera matrix (especially one from OpenCV convention), you’ll need to:
- Invert the matrix:
- In OpenCV, a “world-to-camera” matrix is essentially a view matrix. PlayCanvas, on the other hand, represents the camera as an Entity with a transform (position + rotation). If you invert your view matrix, you get the camera’s world transformation (i.e., where the camera is in the world).
- This is the same trick used in older PlayCanvas forum posts: invert the matrix, then extract translation and rotation (sketched just below).
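A minimal sketch of that inversion step, assuming your network's world-to-camera matrix is already loaded into a pc.Mat4 (the name worldToCam is just a placeholder):

```javascript
// worldToCam: pc.Mat4 holding the OpenCV-style world-to-camera (view) matrix.
// invert() works in place, so clone first to keep the original.
const camToWorld = worldToCam.clone().invert();
```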
- Extract translation and rotation:
- Once you have the inverted matrix, load it into a pc.Mat4. You can call getTranslation() to get the position and getEulerAngles() to get the rotation (or build a pc.Quat from the matrix with setFromMat4() if you prefer a quaternion).
- Then set the camera entity's position and rotation accordingly:
// yourInvertedArray: 16 numbers (column-major) from your inverted world-to-camera matrix
const mat = new pc.Mat4();
mat.set(yourInvertedArray); // set from your inverted matrix

const pos = mat.getTranslation();    // pc.Vec3 position
const eulers = mat.getEulerAngles(); // pc.Vec3 Euler angles, in degrees

cameraEntity.setPosition(pos);
cameraEntity.setEulerAngles(eulers);
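One caveat worth double-checking (this is an assumption about how your matrix arrives, not something the steps above cover): pc.Mat4 stores its elements in column-major order, while a 4x4 matrix flattened from OpenCV/NumPy is usually row-major, so you may need to transpose before calling set():

```javascript
// Hypothetical helper: converts a row-major 4x4 array (e.g. flattened from
// NumPy/OpenCV) into the column-major layout that pc.Mat4.set() expects.
function rowMajorToMat4(rm) {
    return new pc.Mat4().set([
        rm[0], rm[4], rm[8],  rm[12],
        rm[1], rm[5], rm[9],  rm[13],
        rm[2], rm[6], rm[10], rm[14],
        rm[3], rm[7], rm[11], rm[15]
    ]);
}
```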
- Handle coordinate system differences:
- OpenCV and PlayCanvas use different camera conventions: an OpenCV camera has +X right, +Y down and +Z pointing forward into the scene, while a PlayCanvas (OpenGL-style) camera has +Y up and looks down -Z. Depending on how your network's "world-to-camera" matrix is defined, you may need to flip some axes before or after inverting the matrix.
- In practice, the usual fix from the vision literature is to flip the camera's local Y and Z axes, i.e. post-multiply the camera-to-world matrix by diag(1, -1, -1, 1); see the sketch below.
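A sketch of that flip, assuming camToWorld is the inverted (camera-to-world) matrix in OpenCV camera convention (variable names are placeholders):

```javascript
// Flip the camera's local Y and Z axes: OpenCV cameras look down +Z with +Y down,
// PlayCanvas/OpenGL cameras look down -Z with +Y up.
const cvToGl = new pc.Mat4().set([
    1,  0,  0, 0,
    0, -1,  0, 0,
    0,  0, -1, 0,
    0,  0,  0, 1
]); // diagonal, so row- vs column-major ordering doesn't matter here

// Post-multiply so the flip acts in camera-local space: camToWorldGl = camToWorld * cvToGl
const camToWorldGl = new pc.Mat4().mul2(camToWorld, cvToGl);

cameraEntity.setPosition(camToWorldGl.getTranslation());
cameraEntity.setEulerAngles(camToWorldGl.getEulerAngles());
```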
- Set the camera projection if needed:
- Make sure your camera's field of view (FOV) / projection matches the network's assumptions. The world-to-camera matrix itself only covers extrinsics; if your pipeline also gives you an intrinsics matrix (fx, fy, cx, cy), you may need to convert that into a PlayCanvas FOV.
- There's some precedent: in the old forum post, people compute the FOV from an element of the projection matrix (for a standard perspective matrix, the [1][1] element equals 1 / tan(fovY / 2)); from intrinsics, the equivalent is fovY = 2 * atan(imageHeight / (2 * fy)). See the sketch below.
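A sketch of the intrinsics-to-FOV conversion, assuming hypothetical fy (focal length in pixels), imageWidth and imageHeight values from your calibration:

```javascript
// Vertical FOV from pinhole intrinsics: fovY = 2 * atan(h / (2 * fy)).
const fovY = 2 * Math.atan(imageHeight / (2 * fy)) * pc.math.RAD_TO_DEG;

cameraEntity.camera.fov = fovY; // PlayCanvas FOV is vertical by default, in degrees

// Optionally lock the aspect ratio to match the training images
cameraEntity.camera.aspectRatioMode = pc.ASPECT_MANUAL;
cameraEntity.camera.aspectRatio = imageWidth / imageHeight;
```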