Right, but when placing the object, I noticed that you use setPosition and pass in the position from the hit test result. So couldn’t I just check, when I tap, whether the new hit test’s position matches the old one that was used to place the object, as a way of detecting that I tapped on it? Maybe not just a single position but a whole area around that point.
You still have the issue where screenToWorld is incorrect due to the change in the camera’s view matrix.
The WebXR Hit test also only works from the center of the screen, not where you tap
Now I didn’t understand a single thing lol.
You still have the issue where screenToWorld is incorrect due to the change in the camera’s view matrix.
Why is this an issue? What does it affect?
The WebXR Hit test also only works from the center of the screen, not where you tap
What? Do you mean that I could get different positions for the same spot where I placed the object, if the phone is in a different place while pointing at that spot? If that’s the case, then it indeed wouldn’t work…
EDIT
Ok, what I got from this is that it basically wouldn’t work lol T_T. Thanks.
screenToWorld does a projection from screen space to world space via the camera, and parameters such as FOV affect the projection. From the ticket in the engine repo, it looks like the camera component’s parameters are not updated to match the WebXR camera view, since that is based on the device camera.
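For reference, this is roughly how screenToWorld is called (a minimal sketch; the entity name and tap coordinates are placeholders):

```javascript
// screenToWorld unprojects a screen point using the camera component's own
// fov/aspect/clip values. In immersive AR the real projection comes from the
// WebXR device, so stale component values give a wrong world point.
var cameraEntity = this.app.root.findByName('Camera'); // assumed entity name
var worldPoint = cameraEntity.camera.screenToWorld(
    screenX,                     // placeholder tap coordinates
    screenY,
    cameraEntity.camera.farClip  // depth along the ray to unproject at
);
```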
What I’m saying is that with the WebXR raycast you can only cast a ray from the center of the screen, not from where you tap on the screen.
I’ve had a quick look at the issue, and the root of all of this is that in WebXR, in immersive (AR) mode, no touch events are fired by the engine.
This is because, by default, we listen for touch events on the canvas element, and that is removed/hidden by the browser in immersive mode.
I’m looking to see if there is a way to get touch events via the browser in WebXR immersive mode.
Is this a new bug then, or is it something that was reported before? Also, a little off topic: how do I display error messages as an overlay in PlayCanvas? I used to be able to use console.error() to debug stuff on the screen, but it’s not working now.
That still works. HOWEVER, HTML DOM elements are not shown in WebXR sessions by default, and that is what’s used to show error messages in the launch tab.
Use Chrome remote debugging to get proper logs and debugging
It’s not a new bug, nor has it been reported before. It just was never found.
All right thank you very much for this. Would you like me to create a bug report?
That still works. HOWEVER, HTML DOM elements are not shown in WebXR sessions by default, and that is what’s used to show error messages in the launch tab.
Ah ok, thanks, that explains it. Although it would be nice if just the error messages were displayed. It’s much easier to debug with console.error showing on the screen than plugging in the phone and using Chrome inspect.
No, as I’m not yet convinced it’s a bug. Still investigating
It’s not a bug, but the solution is not obvious.
The input system in a WebXR session is different because there aren’t any DOM elements.
Instead, you have to use the app.xr.input system: XrInput | PlayCanvas API Reference
With this, you can raycast into the world via the XrInputSource: XrInputSource | PlayCanvas API Reference
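As a rough illustration of the pattern (a minimal sketch, not the exact code from the project below; the 0.5 m sphere radius is an assumption):

```javascript
// Listen for the WebXR 'select' event (a screen tap in immersive AR) and
// test the input source's ray against a bounding sphere around this entity.
var XrTapSelect = pc.createScript('xrTapSelect');

XrTapSelect.prototype.initialize = function () {
    this.ray = new pc.Ray();
    this.sphere = new pc.BoundingSphere(new pc.Vec3(), 0.5); // assumed radius

    this.app.xr.input.on('select', this.onSelect, this);
};

XrTapSelect.prototype.onSelect = function (inputSource) {
    // Keep the sphere centred on the entity in case it has moved
    this.sphere.center.copy(this.entity.getPosition());

    // Build a world-space ray from the (transient) input source
    this.ray.set(inputSource.getOrigin(), inputSource.getDirection());

    if (this.sphere.intersectsRay(this.ray)) {
        console.log('Tapped on', this.entity.name);
    }
};
```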
I’ve made an example here from an old project: https://playcanvas.com/editor/scene/1339365
This creates dinosaurs as you scan around the room floor. The red sphere represents the pc.BoundingSphere area being raycast against. Tapping on one will make the dinosaur pulse.
So the script that does the magic is raycast.js, which I need to attach to the entity that will be spawned, is that it? Is the red sphere around the dinosaur just a debug thing, or did you configure that somewhere?
Thanks a lot by the way!
raycast.js holds the logic, yes. However, it’s just an example. You will have to consider how to integrate the logic there into your own app. It’s not a simple drop-in that just works.
The red sphere is on the template of the dinosaurs to show the size of the bounding sphere that is being raycast against.
I got it working in my project, except for the red sphere. I really want that one as well, but I can’t find where in the code you spawn it. Which script is it? I’m looking at all of them and can’t figure it out =/.
EDIT
Oh, is the red sphere the render under the dinosaur template? But does it scale with the bounding sphere size? Where’s that code located?
It’s just manually sized in that project
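If you’d rather not size it by hand, one hedged option (assuming the entity has a render component, and debugSphere is a hypothetical child entity using the engine’s unit-diameter sphere primitive) is to derive it from the render AABB:

```javascript
// Derive the bounding sphere from the world-space AABB of the first mesh
// instance (for brevity) and scale a debug sphere to visualise it.
var aabb = this.entity.render.meshInstances[0].aabb;
var radius = aabb.halfExtents.length(); // conservative: reaches the AABB corners
var sphere = new pc.BoundingSphere(aabb.center.clone(), radius);

// The built-in sphere primitive has a diameter of 1, so scale by 2 * radius
var debugSphere = this.entity.findByName('debugSphere');
debugSphere.setLocalScale(radius * 2, radius * 2, radius * 2);
```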
Made a quick sample project that has a more generic script that can be used: WebXR AR Raycasting Shapes | Learn PlayCanvas
@yaustar I have 2 questions regarding this:
1- Would I need to use this raycast scheme for UI elements on the screen as well (because it’s WebXR)? Would that work? Do I just place a raycast script on the UI buttons too?
2- Is it possible to detect finger movement with this raycast scheme? I tried to use this project as an example; it uses touch events like touchmove, etc., but with raycasts I don’t have those, right? How could I achieve this functionality? The idea is that I want to use the colliders as hotspots for finger-movement detection. I don’t want to use them just to select the objects, but also to rotate/scale them. Thanks!
EDIT
I found this: this.app.xr.input.on('select', this.doRayCast, this);
which seems to cast a ray when I tap on the screen, like a pointer click. I guess I need something like this.app.xr.input.on('selectMove', this.doRayCast, this);
, like a pointermove. Is there something like that? If not, what can I do instead?
EDIT
It seems that it’s just not possible to do what I want regarding number 2? I read these 2 documents: Inputs and input sources - Web APIs | MDN and Inputs and input sources - Web APIs | MDN, which seem to suggest that WebXR only detects selection, and something like a finger drag is not covered by the API?
EDIT
I also tried following this forum post (using your last post’s project as a reference) about scaling and rotating in an AR environment, but it doesn’t work ;_;. I’m not getting any errors, though. I’m guessing it’s because the traditional input system doesn’t work in a WebXR context? What do I do then? At this point I’m just trying to rotate the model without it having to be selected and without the fingers having to touch the bounding area.
I’m just trying to make it rotate with touch input on the screen, but even that fails. I’m at a loss T_T.
EDIT
I also found more information here, so it seems that it’s currently not possible at all? I literally can’t detect touch input, and consequently can’t do basic AR stuff with a placed model, like rotating and scaling it, because the WebXR API currently doesn’t allow it?
EDIT
I also found this post, but the poster is not using WebXR, so I don’t think it’s relevant…
EDIT
I found this video on YouTube where the model is rotated in WebXR mode with touch controls, but using three.js. I’m thoroughly confused now. So it is possible after all? How do I get this working in PlayCanvas?
Thanks a lot for helping me out!
There’s a list of events on the API that you can listen to: XrInput | PlayCanvas API Reference
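There is no 'selectmove' event, but one possible workaround (a sketch that assumes the transient screen input source keeps updating its ray while the finger is down, which is how Chrome on Android behaves; the sensitivity value is arbitrary) is to poll the input source between selectstart and selectend:

```javascript
// Emulate a touch 'drag' in AR by sampling the active input source's ray
// direction every frame while a select gesture is in progress.
var XrDrag = pc.createScript('xrDrag');

XrDrag.prototype.initialize = function () {
    this.activeSource = null;
    this.lastDir = new pc.Vec3();

    this.app.xr.input.on('selectstart', function (inputSource) {
        this.activeSource = inputSource;
        this.lastDir.copy(inputSource.getDirection());
    }, this);

    this.app.xr.input.on('selectend', function () {
        this.activeSource = null;
    }, this);
};

XrDrag.prototype.update = function (dt) {
    if (!this.activeSource) return;

    // Use the horizontal change in ray direction as a crude rotation input
    var dir = this.activeSource.getDirection();
    var dx = dir.x - this.lastDir.x;
    this.entity.rotate(0, dx * 200, 0); // 200 is an arbitrary sensitivity
    this.lastDir.copy(dir);
};
```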
See ticket that you created: https://github.com/playcanvas/engine/issues/4035#issuecomment-1046115770
Either use the DOM Overlays API that is mentioned in the GitHub ticket, or what I was previously thinking of: have a plane shape to raycast against that is always in front of the camera and facing the camera.
That way you can use it as a ‘virtual’ 2D screen
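A rough sketch of that ‘virtual screen’ idea (the camera attribute and the 1 m plane distance are assumptions; the ray/plane intersection is done by hand to avoid depending on a specific pc.Plane API):

```javascript
// Intersect the XR select ray with a plane held in front of the camera,
// giving a tap point on a 'virtual 2D screen'.
var XrVirtualScreen = pc.createScript('xrVirtualScreen');

XrVirtualScreen.attributes.add('cameraEntity', { type: 'entity' });

XrVirtualScreen.prototype.initialize = function () {
    this.app.xr.input.on('select', this.onSelect, this);
};

XrVirtualScreen.prototype.onSelect = function (inputSource) {
    var cam = this.cameraEntity;
    var normal = cam.forward;

    // A point on the plane, 1 m in front of the camera (assumed distance)
    var planePoint = cam.getPosition().clone().add(cam.forward.clone().mulScalar(1));

    var origin = inputSource.getOrigin();
    var dir = inputSource.getDirection();

    // Standard ray/plane intersection: t = ((p - o) . n) / (d . n)
    var denom = dir.dot(normal);
    if (Math.abs(denom) < 1e-6) return; // ray parallel to the plane

    var t = planePoint.sub(origin).dot(normal) / denom;
    if (t < 0) return; // plane is behind the ray

    var hit = dir.clone().mulScalar(t).add(origin);
    console.log('Hit virtual screen at', hit.toString());
};
```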
UI elements in screen space rely heavily on mouse/touch events from the browser, so they don’t work in WebXR.
Maybe a full-screen invisible DOM element could be added with the DOM Overlay API, and TouchDevice | PlayCanvas API Reference used to get input events from that DOM element.
Just have to remember changing it back when exiting AR.
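Something along these lines might work (a speculative sketch: the DOM Overlays API is still a draft, engine support for app.xr.domOverlay depends on the version, and the overlay element id is a placeholder):

```javascript
// Show a DOM element as a WebXR DOM overlay and read touches from it with
// pc.TouchDevice instead of the (hidden) canvas.
var overlayDiv = document.getElementById('ar-overlay'); // assumed element

// Set the overlay root before starting the session (newer engine versions)
this.app.xr.domOverlay.root = overlayDiv;

this.app.xr.start(this.entity.camera, pc.XRTYPE_AR, pc.XRSPACE_LOCALFLOOR, {
    callback: function (err) {
        if (err) console.error(err);
    }
});

// Route touch events from the overlay element into the engine
var touch = new pc.TouchDevice(overlayDiv);
touch.on(pc.EVENT_TOUCHSTART, function (event) {
    console.log('Overlay touch at', event.touches[0].x, event.touches[0].y);
});

// Remember to detach and restore the original touch element on session end
this.app.xr.on('end', function () {
    touch.detach();
});
```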
You are in a realm where a lot of this is still experimental. There’s going to be a lot of trial and error, looking at the engine source code, etc.
This all seems way more complicated than it should be. Not that it’s PlayCanvas’ fault. If you could also extend this example to use the DOM overlay to do something, that would be nice, as I think it would make for a more complete example.
Thank you very much for getting back to me and thanks for all your input.
It is still a draft API from the W3C, so it’s subject to change.