I’m developing a web app that presents 360° environments (no additional models).
I’m currently using a sphere with a texture, similar to the official WebXR 360 Image Demo: https://developer.playcanvas.com/en/tutorials/webxr-360-image/
Another way of doing this would be to use different skyboxes.
Would that skybox workflow offer advantages in terms of the file-size/quality trade-off?
In another demo, the skyboxes seemed to have quite small file sizes while still keeping their quality. But of course that could also come down to mipmap / texture compression settings.
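For a rough feel for the numbers, here is a back-of-the-envelope comparison. The sizes are just example values I picked: a 4096×2048 equirectangular panorama and six 1024×1024 cube faces both give roughly the same pixels-per-degree (4096/360° ≈ 1024/90° ≈ 11.4), yet the cubemap stores fewer texels because the equirect wastes resolution near the poles:

```javascript
// Rough texel-count comparison of one equirectangular texture vs. a cubemap,
// each with a full mip chain (a mip chain adds roughly 1/3 overhead).
function texelsWithMips(width, height, faces = 1) {
  return Math.round(width * height * faces * 4 / 3);
}

const equirect = texelsWithMips(4096, 2048);    // single 4096x2048 panorama
const cubemap  = texelsWithMips(1024, 1024, 6); // six 1024x1024 faces

console.log(equirect, cubemap); // the cubemap holds ~25% fewer texels
```

On-disk size then mostly depends on the compression (Basis, etc.) applied on top, which is why compressed skybox demos can look deceptively small.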
But maybe someone with more know-how on how skyboxes work internally could offer me some insight on this.
As always, thanks for any support.
Thanks for your response. I guess by “texture per face” you mean the six sides of the cube, right?
That of course makes sense, but since it will be part of an offline-capable PWA, I really have to keep overall file sizes in mind. Besides that, I guess the texture filtering / compression works the same either way (mipmaps, Basis, …)?
As an alternative, you can also use an inverted cube/sphere with multiple materials to achieve similar benefits. For a cube, you could even use 4 textures per face, making a total of 24 if set up correctly.
I am using the sphere. The reason is simple: it is much easier to render a spherical panorama. For example, you can render it with the Panorama Camera in Blender. You don’t need to render all those cube sides manually or write a script for it.
Thanks for the info. I was thinking exactly the same. I wrapped my head around cmftStudio and tools like https://jaxry.github.io/panorama-to-cubemap/.
And although that worked, I ended up with results similar to the sphere approach. Changing materials programmatically is also a bit easier with the sphere, in my opinion.
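For what it’s worth, the swap itself is tiny with the sphere approach. A minimal sketch of the pattern, with plain-object stand-ins instead of a real `pc.StandardMaterial` so it runs anywhere (the names `swapPanorama` and `room2.basis` are just placeholders; in PlayCanvas you would assign the new texture to `emissiveMap` or `diffuseMap` and call `update()`):

```javascript
// Sketch: swapping the panorama texture on the sphere's material at runtime.
function swapPanorama(material, newTexture) {
  material.emissiveMap = newTexture; // point the material at the new panorama
  material.update();                 // PlayCanvas rebuilds the material's shader/uniforms
  return material;
}

// Plain-object stand-ins so the sketch runs without the engine:
const material = { emissiveMap: null, updated: 0, update() { this.updated++; } };
const roomTexture = { name: 'room2.basis' };

swapPanorama(material, roomTexture);
```

With a skybox you would instead reassign the scene’s cubemap, which means keeping six face textures (or a prefiltered cubemap asset) around per environment.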