WebXR immersive-ar feature handling rationale


I started experimenting with PlayCanvas and the WebXR immersive-ar module, and ran into some questions:

  • why are the requested features of the navigator.xr.requestSession options not available for pc.XrManager.start?

  • is there any advantage to forcing optional features, instead of leaving the decision of which features to use to the developer? At least light-estimation, hit-test and depth-sensing are hardcoded here.

No complaint, just wondering.
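For context, the raw WebXR entry point being discussed looks roughly like this. This is a sketch only: the feature names come from the WebXR spec, and the call is wrapped in a function purely for illustration.

```javascript
// Sketch of the raw navigator.xr.requestSession call that pc.XrManager
// wraps. Feature names follow the WebXR spec; nothing here is invoked.
function startRawArSession() {
    return navigator.xr.requestSession('immersive-ar', {
        // session creation is rejected if a required feature is unavailable
        requiredFeatures: ['local'],
        // optional features are granted when possible, silently skipped otherwise
        optionalFeatures: ['dom-overlay', 'hit-test', 'light-estimation']
    });
}
```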

Hi Michael.

It is a two-sided coin. The benefit of this approach is that the developer does not need to think about providing details to enable the API; they can just check whether a feature is available, and use it if so. If the startXr method accepted flags for enabling features, the developer would have to explicitly enable a feature before being able to check whether it is available on the hardware.

The downside is that providing an optional feature to the session leads to UAs enabling some internal machinery, which consumes CPU/GPU behind the scenes. So there is a cost to enabling a feature even without using it.

This follows from the points above. Additionally, I believe requiredFeatures is not a good path, as it leads to an inability to start the session when some feature is not available.
Here is an example of what the requiredFeatures path would look like: the developer requests some API; if it is not available, the session fails to start; the developer has to subscribe to that case and then decide what to do. If they want to use the session progressively without that feature, they have to try starting the session again, but that can only happen with user intent (click, touch, etc.). So the UX would be bad: click, show a message that something is not available, and click again?

The optionalFeatures path, on the other hand, is far more UX friendly: start the session, check if the feature is available, and use it if so; if not, decide what to do — either end the session or run the app logic without that feature.

Currently most WebXR features are actually experimental, and their corresponding API specs are not final; they are subject to change. As WebXR is still not finalized, we decided to simply enable things for developers to experiment with. It is likely that in the future we will add a flags object to the session, so the developer will provide flags to enable optional features.

Current goals of the integration:

  1. Make sure developers can access WebXR features with as little coding as possible.
  2. Provide close integration between the engine and WebXR features, so developers do not need to learn and interpret the internals of the WebXR specs.
  3. Because the specs are not final, many of these APIs are subject to change and are likely to have breaking changes until the WebXR specs are more stable.

Hello moka.

Thanks for your detailed answer.

I think it is great to have sensible defaults, but in my opinion there should be a way to set things specifically where these defaults don’t fit.

It is great when a framework helps to get a project started quickly without the need to care about all the nitty-gritty details, but I don’t think a framework should restrict what a developer can do with a feature like AR. I would appreciate it if the current handling could change in the future.

I absolutely agree that we should not restrict features that are provided by the underlying APIs and specs. And as far as I’m aware, the current integrations (except Illumination Estimation) are not limiting, and provide full coverage of AR/VR features. We also expose all native handles to the underlying APIs for raw, direct access.

Could you please explain what you are trying to achieve, and what is not possible with the current implementation?

Good to know :+1:

One point you actually mention yourself: not requesting features that aren’t used is a good idea to reduce resource consumption.

The other one is that I actually want the creation of the AR session to fail when the required features aren’t available, because no fallback is possible. For example, dom-overlay is required because user guidance is needed during the AR experience.

I am aware that there are functions to check for feature availability, but my impression is that they need a running session for the checks. When a required feature is missing, I would start a session only to tear it down again.

Probably not that big of a deal. But why not use such a feature when the platform (WebXR) offers it?
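To make the distinction concrete: before any session exists, raw WebXR only lets you check whether a session *mode* is supported, not whether individual features will be granted. A hedged sketch (checkArSupport is a hypothetical helper name, not an engine API):

```javascript
// Hypothetical helper: the only pre-session check raw WebXR offers is
// navigator.xr.isSessionSupported for the session mode. Per-feature
// availability (e.g. app.xr.domOverlay.available) is only known once
// a session has actually started.
function checkArSupport() {
    if (typeof navigator === 'undefined' || !navigator.xr) {
        return Promise.resolve(false);
    }
    return navigator.xr.isSessionSupported('immersive-ar');
}
```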

Here is how it is done now; the developer has a clear choice to end the session or do a fallback:

app.xr.start(camera, pc.XRTYPE_VR, pc.XRSPACE_LOCAL, {
    callback: function (error) {
        if (error) return;

        // if DOM Overlay is not available, end session
        if (! app.xr.domOverlay.available) {
            app.xr.end();
            // or do a fallback, no restart of session is required
        }
    }
});

And here is an example of what it would look like if requiredFeatures were implemented: it would throw an exception in the promise, which would lead to the error callback.
BUT: there can be many different reasons for an error, and the developer would need to handle each error case by case. Also, if there are many potential reasons for a failed session start, you will only know about the first one:

app.xr.start(camera, pc.XRTYPE_VR, pc.XRSPACE_LOCAL, {
    callback: function (error) {
        if (error) {
            // some error happened, but it could be for many reasons
            // also, a fallback is not possible without restarting the session
            return;
        }
    }
});

This looks like bad design to me, as the error object would need to contain a lot of information depending on the failure. It also does not tell you whether another error would occur once the first reason is eliminated. Basically, there is no way to say “These 5 required features are not available”. So if a developer implements a fallback but only removes one missing feature, the start can fail due to another missing feature. And a session restart requires user intent, so: click, fail, click, fail. That is basically awful UX.
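To illustrate the point: a requiredFeatures-style error would have to carry the full list of missing features to allow a sensible fallback. The describeStartError helper and its shape below are invented purely for illustration; nothing like it exists in the engine.

```javascript
// Hypothetical only: the kind of information an error callback would
// have to carry for a requiredFeatures path to support a fallback.
function describeStartError(missingFeatures) {
    if (!missingFeatures || missingFeatures.length === 0) {
        return 'session failed for another reason';
    }
    return 'missing required features: ' + missingFeatures.join(', ');
}
```

With a plain error object, the developer only learns about the first failure, which is exactly the UX problem described above.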

Please provide an example of what you are trying to achieve that is not possible with the current API implementation, but is possible with raw WebXR.


I see. You know things better and insist this is the way it has to be. I have seen this attitude at other places about Playcanvas already. Pity.

Good thing I wanted to switch to local development anyway. Because apart from Playcanvas thinking, there are many reasons not to use the online code editor. So, everybody is happy I guess.

@michaelvogt Am I right in saying that you are looking for an API to check for supported features before calling startXr, so that the scene/app can adapt to the device’s capabilities ahead of time? E.g. moving the camera to a different position/angle, or showing particular UIs?

While we aren’t resistant to change (see PRs about changing hard-coded XR features #2381 and #2397), we have to be careful about exposing new APIs, as once we have released one, it’s very difficult to pull it back :sweat_smile:

We do our very best to keep backwards compatibility, and therefore still support old APIs for physics raycasts 4 years later.

Everything we add (both internally and externally) comes under the same scrutiny: understanding the use cases, what benefits it can bring if added, and what changes for external users.

Edit: On a side note, we have a Node.js-based tool that can push/pull code to and from a PlayCanvas project, so you can use a local code editor: https://github.com/playcanvas/playcanvas-sync :slight_smile:


Hello yaustar.

Thank you for the link to playcanvas-sync. I had already found it and think it is an amazing way of integrating local development. It makes PlayCanvas very attractive to use.

I don’t have a special use case in mind; I just wanted to understand why the feature handling is done the way it is. I see the aim to make it as easy as possible, but to me it feels unnecessarily complicated and actually takes control away from the app programmer.

Yes, I understand the problems of framework development. Been there, done that. Backwards compatibility is important.

My comments were never intended to request a change of the implementation — maybe an addition of a way to control the defaults:

  • requestFeatures option not available
  • unnecessary features requested without a way to change it by the app programmer

As I said, no big deal. I’ll just make the changes in my fork and recommend this to the people I work with.

All good.


Thank you for your feedback. It is valuable.

This will be changed in the future, when the WebXR APIs move from experimental to production ready. The current focus is to help the community, spread WebXR technology, and encourage experimentation. I would still ask for an example of what you are trying to achieve that is not possible with the current API, so I can see a use case I might have missed.

You can also use a custom-built / modified engine in the Editor if you want. I’m not sure I see the relationship between the WebXR features and whether or not you use the Editor.

If you feel there are missing features that block your development, feel free to make a PR to the engine, following the design guidelines and principles of the open source project.

You’re still asking the same question, without realizing that this isn’t the point. I’m simply asking to make the requiredFeatures option available to the programmer.

Your attitude came across very well. Why would I prepare a pull request for it, only to be faced with the same annoying remarks again? You clearly think you know better, and force this on everybody else.

I got the answers to my questions, so no further need to discuss.

Just wondering if you’re actually planning to make AR projects possible again? It’s been broken for me for two weeks.

Uncaught DOMException: Failed to execute ‘getDepthInformation’ on ‘XRFrame’: Depth sensing feature is not supported by the session.
at XrDepthSensing.update (https://code.playcanvas.com/playcanvas-stable.dbg.js:40492:29)
at XrManager.update (https://code.playcanvas.com/playcanvas-stable.dbg.js:40872:56)

We haven’t yet added the origin trials needed for depth sensing: https://github.com/playcanvas/editor/issues/237

The other WebXR AR projects that I’ve tested in the tutorials section still work for me on Chrome.

It’s possible I’m on a device that doesn’t have a depth sensor? What device are you using?

Edit: If you need a trial token, it looks like it could be added to the project directly: https://github.com/GoogleChrome/OriginTrials/blob/gh-pages/developer-guide.md#16-can-i-provide-tokens-by-running-script

Hello yaustar,

thanks for your response.

I hope you understand that the problem here is not how I can get PlayCanvas running; I just compiled my own version that works.

The problem here is that Google broke PlayCanvas’s AR features when they flipped the switch for depth sensing. As PlayCanvas expects this feature to be available, it breaks.

And this shows my point perfectly well: it is really bad practice to force defaults on developers without giving them a chance to change them.

Sorry, I’m a bit confused, as the PlayCanvas projects in the tutorials section still work for me with the latest version of Chrome on Android and the current release of the PlayCanvas Engine (1.39.4, I think). I don’t see the crash you are getting?

Eg: https://playcanvas.com/project/739875/overview/webxr-ar-image-tracking

Would it be possible to share a project that shows the crash please? Or is this with Chrome Canary/Beta?

As a reminder: WebXR Depth Sensing - is a Draft Spec, read here: https://immersive-web.github.io/depth-sensing/
It means it is not production ready and should not be used in any production, as stated by the W3C. It is subject to change without any guarantee that it will keep working.

This means the API is constantly modified against the evolving specs, and UAs can change it at will, without any obligation to keep it compatible — especially since it is not available without the WebXR Incubations flag turned on in chrome://flags. So if your project suddenly broke, make sure you understand the reason: it could be due to your code changes, engine changes, or UA changes.
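As a defensive pattern until the spec settles, depth queries can be gated on the session actually granting the feature. This is a sketch: canQueryDepth is a hypothetical helper, and the flags checked are meant to mirror the engine’s XrManager / XrDepthSensing properties.

```javascript
// Hypothetical guard: only query depth information when AR is active
// and depth sensing was actually granted by the session. The `active`
// and `depthSensing.available` flags mirror the engine's own properties.
function canQueryDepth(xr) {
    return !!(xr && xr.active && xr.depthSensing && xr.depthSensing.available);
}
```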

Also, if your project broke, please share a replication URL; it will help us replicate the issue and work on a fix if it is within our abilities. Currently, I’ve run the WebXR AR Depth Sensing demos in Android Chrome 89 with the Incubations flag on, and they do not crash with errors. So this still works: https://playcanv.as/p/UN0z1XE2/

The spec for WebXR Depth Sensing is currently being changed by the W3C team; you can check on progress here: https://github.com/immersive-web/depth-sensing
UAs are updating their latest versions too, so Canary (Chrome 91) has already implemented the latest draft spec, and a PR to the PlayCanvas engine reflecting those changes is coming. During implementation I had concerns with the spec changes and started a discussion on the W3C repo regarding the spec.


Just to add to this: the stable version of the engine should always target the production releases of Chrome, as that is what users will be running on, and it is our primary focus for engine releases.

Thanks to Max, work is being done to adapt to the upcoming changes coming down the pipeline in Chrome releases.


Let’s see:

I start a project from the editor, which is part of your tutorials. And when this throws an exception, it is the user’s fault? Sorry, but something is completely wrong here.

But it seems Moka finally understands the problem:

It means it is not production ready, and should not be used in any production as stated by W3C. It is subject to changes without any guarantees to work.

Yes, finally — which is what I’ve been saying all along. It is ridiculously stpd to hardcode any feature that can break, in a way that can’t be overridden by an app developer.

With this, I think, all is said about this topic.

That’s not what I said; I don’t know where you’re getting that from. I can’t reproduce the crash locally with the production engine and browsers on Android, and I have been asking questions about how you managed to encounter the crash and which browser it was on.

This is so that we can look at fixing these issues as soon as possible.