Advice on VR vs desktop UX

Hi, I'm just getting familiar with PlayCanvas. Before committing to development, I'm trying to tick off as many questions as I can so I don't get stuck later. I've been through all the demos and explored as thoroughly as I can, but I'm wondering about the VR vs desktop user experience.

I would like the following to happen:

  • user hits site
  • device capability is detected
  • if it's WebXR compatible (Quest 2 in my case), a VR button is displayed (taking the user into the immersive, teleporting experience)
  • if the device doesn't have XR capability, the user automatically gets WASD/arrow-key controls and a first- or third-person experience
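
In PlayCanvas terms, I imagine the branching would look something like this. This is a rough, untested sketch; the entity names (`VrButton`, `Camera`) and the desktop-controls hook are placeholders for whatever the project actually uses:

```javascript
var ExperienceSelector = pc.createScript('experienceSelector');

ExperienceSelector.prototype.initialize = function () {
    var app = this.app;
    // Placeholder entity names - adjust to the actual scene hierarchy
    var vrButton = app.root.findByName('VrButton');
    var camera = app.root.findByName('Camera');

    if (app.xr.supported && app.xr.isAvailable(pc.XRTYPE_VR)) {
        // WebXR immersive VR is available (e.g. the Quest 2 browser):
        // show the VR button; clicking it enters the teleporting experience
        vrButton.enabled = true;
        vrButton.button.on('click', function () {
            app.xr.start(camera.camera, pc.XRTYPE_VR, pc.XRSPACE_LOCALFLOOR);
        });
    } else {
        // No XR capability: hide the button and fall back to
        // WASD/arrow-key first- or third-person controls
        vrButton.enabled = false;
        // ... enable the keyboard/mouse controller script here
    }
};
```

From what I can tell, availability can also be reported asynchronously (e.g. a headset connecting after page load), so listening for `app.xr.on('available:' + pc.XRTYPE_VR, ...)` might be more robust than a one-off check, but I may be missing something.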

This doesn't seem to be how any of the demos are set up. For example, the VR Lab demo on desktop has a "click to simulate teleporting" mechanism, which is pretty clunky IMHO. I wonder if there's a reason for this that, as a newbie, I can't see. If there isn't and my logic is sound, has anyone seen a demo of this flow anywhere?

I think your plan is sound; I don't see a problem with it.

I'm experimenting with XR at the moment and have a test project set up similarly. It's a product configurator test: when you hit the site, you see a normal 3D spinning model that you can orbit around and zoom into, with keyboard controls if needed. If XR is detected, an AR button appears that switches to AR, where a different set of controls is used. On iOS, where WebXR is not available, we convert the loaded models/scene to USDZ at runtime and show them in AR using the built-in QuickLook viewer.
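
Roughly, the branching looks like this. It's a simplified sketch: `showArButton` stands in for whatever UI you use, the USDZ conversion step is our own pipeline and omitted here, and `model.usdz` plus the entity names are placeholders:

```javascript
function setupArEntry(app) {
    if (app.xr.supported && app.xr.isAvailable(pc.XRTYPE_AR)) {
        // WebXR AR path: the button starts an immersive-ar session
        showArButton(function () {
            var camera = app.root.findByName('Camera'); // placeholder name
            app.xr.start(camera.camera, pc.XRTYPE_AR, pc.XRSPACE_LOCALFLOOR);
        });
    } else if (/iPad|iPhone/.test(navigator.userAgent)) {
        // iOS path: no WebXR, so hand a USDZ file to the built-in
        // QuickLook viewer via an anchor with rel="ar"
        showArButton(function () {
            var a = document.createElement('a');
            a.rel = 'ar';
            a.href = 'model.usdz'; // placeholder: our runtime-converted asset
            // QuickLook expects an <img> as a direct child of the anchor
            a.appendChild(document.createElement('img'));
            a.click();
        });
    }
    // otherwise: stay in the normal orbit/zoom viewer
}
```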


Ooooh I like that USDZ conversion sauce! I noticed the other day that the USDZ of something I was downloading was smaller than the glTF/GLB. Would be cool if the de facto standard were also the most compact 🙂. Thanks for your feedback, glad to know I'm not too far off with my thinking; I'll keep looking for a reference implementation.