I am trying to modify the example hit test project to make it so that instead of spawning multiple objects each time I tap on the floor, I only want one object to ever be spawned, and I want to be able to use 1 finger to rotate and 2 to scale. See example video attached
I guess I don’t know where to start. Where do I detect that I’m tapping on the model for example, etc. I just need some pointers. Thank you for your time!
For that, check the picking tutorials and examples. There are several techniques presented there, and depending on the expected accuracy you can choose one:
You can easily do that in your code by adding a guard clause around your spawning method: check if the entity has already been added, and if so, exit/return early.
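A minimal sketch of that guard clause idea. The names here (`Spawner`, `spawnedEntity`, `createEntity`, `onHitTestResult`) are illustrative placeholders, not from the actual project:

```javascript
// Minimal sketch of the guard clause idea. The names are illustrative,
// not from the actual project.
var Spawner = {
    spawnedEntity: null,

    createEntity: function () {
        // stand-in for instantiating your template entity
        return { name: 'model' };
    },

    onHitTestResult: function (position) {
        // guard clause: if the model was already spawned, bail out early
        if (this.spawnedEntity) {
            return;
        }
        this.spawnedEntity = this.createEntity();
        // in the real script you would also call
        // this.spawnedEntity.setPosition(position);
    }
};
```

Every hit test result after the first one returns immediately, so only one model ever exists.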
You can check some of the input examples to see how to receive touch input. The event argument passed to the handler contains the active touches on screen, so from there you can tell whether one or more fingers are down. The model viewer project template uses that too, you can check it out.
Touch.prototype.onTouchStart = function (event) {
    // event.touches holds all currently active touch points
    if (event.touches.length === 1) {
        // one finger down: e.g. start rotating
    } else if (event.touches.length === 2) {
        // two fingers down: e.g. start a pinch to scale
    }
};
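The gesture math itself is only a couple of lines. A sketch in plain JavaScript, assuming you track the previous frame's touch positions yourself (these helper names are illustrative, not engine API):

```javascript
// Sketch of the gesture math only, in plain JavaScript. Feed it the touch
// positions from onTouchStart/onTouchMove; these are not engine functions.
function pinchDistance(t0, t1) {
    // distance between two touch points; the ratio between the current and
    // previous distance gives you a scale factor for the model
    var dx = t0.x - t1.x;
    var dy = t0.y - t1.y;
    return Math.sqrt(dx * dx + dy * dy);
}

function rotationDelta(prevX, currX, sensitivity) {
    // horizontal drag of a single finger mapped to degrees of yaw
    return (currX - prevX) * sensitivity;
}
```

With one touch you would apply `rotationDelta` to the entity's Y rotation each frame; with two you would multiply the local scale by the ratio of the current to the previous `pinchDistance`.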
@Leonidas Thanks. I am trying to use the Frame Buffer Picking to no avail. The idea is that once I tap to spawn an object, if I tap on the object with one finger I can rotate it, and with two I can scale it. But it seems that my taps are not recognized.
For now, I am simply trying to make the object pulse like the original scene. But after I place the object and tap on it, nothing happens. Here’s my project.
Hey guys @Leonidas @yaustar, I have another idea. Is it possible to use the hit test from WebXR itself to detect that I tapped on the model? If the hit test is used to place the model, it should be usable to detect that I tapped on the model I placed as well, right? So some way to use the hit test and return the entity I tapped on.
Right, but when placing the object, I noticed that you use setPosition and pass in the position of the result from the hit test. So couldn't I just check, when I tap, whether the new hit test's position matches the old one that was used to place the object, as a way to detect that I tapped on it? Maybe not just a single position but a whole area around that point.
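Something like this check is what I have in mind, in plain vector math (the names and the 0.2 metre threshold are just examples; in PlayCanvas you could use pc.Vec3's distance instead):

```javascript
// compare a new hit test position against the stored spawn position; if it
// lands within `threshold` metres, treat it as a tap on the placed model
function isNearSpawnPoint(hit, spawn, threshold) {
    var dx = hit.x - spawn.x;
    var dy = hit.y - spawn.y;
    var dz = hit.z - spawn.z;
    return (dx * dx + dy * dy + dz * dz) <= threshold * threshold;
}
```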
You still have the issue where screenToWorld is incorrect due to the change in the camera’s view matrix.
Why is this an issue? What does it affect?
The WebXR hit test also only works from the center of the screen, not from where you tap.
What? Do you mean that I could get different positions for the same spot where I placed the object, if the phone is in a different place while pointing at that spot? If that's the case then it wouldn't work indeed…
EDIT
Ok what I got from this is that basically it wouldn’t work lol T_T. Thanks.
screenToWorld does a projection from screen space to world space via the camera, and parameters such as the FOV affect the projection. From the ticket in the engine repo, it looks like the camera component parameters are not updated to match the WebXR camera view, since that is based on the device camera.
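A toy illustration of why the FOV matters (this is not engine code): the same off-center screen x maps to a different ray angle under a different field of view, so if the camera component's FOV no longer matches the real WebXR camera, the ray that screenToWorld gives you points the wrong way.

```javascript
// maps a horizontal screen coordinate to the angle of the camera ray through
// it, for a given horizontal field of view (simplified, ignoring aspect ratio)
function screenXToAngle(screenX, screenWidth, fovDegrees) {
    var ndc = (screenX / screenWidth) * 2 - 1;   // -1 .. 1 across the screen
    var halfFov = (fovDegrees * Math.PI / 180) / 2;
    return Math.atan(ndc * Math.tan(halfFov));   // ray angle in radians
}
```

For example, `screenXToAngle(800, 800, 60)` and `screenXToAngle(800, 800, 90)` give different angles for the same tap at the screen edge, which is exactly the mismatch between the component's FOV and the device camera's.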
What I’m saying is that you can only cast a ray from the center of the screen, not where you tap on the screen with the WebXR raycast
I’ve had a quick look at the issue and the root of all of this is that in WebXR, in immersive (AR) mode, there are no touch events being fired by the engine.
This is because by default we listen to touch events on the canvas element and that is removed/hidden by the browser when in immersive mode.
Looking to see if there is a way to get touch events via the browser in WebXR Immersive
Is this a new bug then, or is it something that was reported before? Also, a little bit off topic: how do I get error messages to display as an overlay in PlayCanvas? I used to be able to use "console.error()" to debug stuff on the screen but it's not working now.
All right thank you very much for this. Would you like me to create a bug report?
That still works. HOWEVER, HTML DOM elements are not shown in WebXR sessions by default, and that is what is used to show error messages in the launch tab.
Ah ok, thanks, that explains it. Although it would be nice if just the error messages were displayed. It's much easier to debug with console.error showing on the screen than plugging in the phone and using Chrome inspect.
This creates dinosaurs as you scan around the room floor. The red sphere represents the pc.BoundingSphere area to raycast against. Tapping on one will make the dinosaur pulse.
So the script that does the magic is raycast.js, which I need to attach to the entity that will be spawned, and that's it? Is the red sphere around the dinosaur just a debug thing, or did you configure that somewhere?
raycast.js holds the logic, yes. However, it's just an example. You will have to consider how to integrate the logic done there into your own app. It's not a simple drop-in that just works.
The red sphere is on the template of the dinosaurs to show the size of the bounding sphere that is being raycasted against
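For reference, the check itself boils down to a standard ray vs sphere test, roughly what pc.BoundingSphere's intersectsRay does. A plain-object sketch (`origin`, `dir`, `center` are `{x, y, z}` objects, with `dir` assumed normalized):

```javascript
// returns true if a ray starting at `origin` with direction `dir` passes
// through the sphere at `center` with the given radius
function raySphereIntersects(origin, dir, center, radius) {
    var ox = center.x - origin.x;
    var oy = center.y - origin.y;
    var oz = center.z - origin.z;
    // projection of the sphere center onto the ray direction
    var tca = ox * dir.x + oy * dir.y + oz * dir.z;
    // squared distance from the sphere center to the ray
    var d2 = (ox * ox + oy * oy + oz * oz) - tca * tca;
    return tca >= 0 && d2 <= radius * radius;
}
```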
I got it working in my project, except for the red sphere. I really want that one as well, but I can't find where in the code you spawn it. Which script is it? I'm looking at all of them and can't figure it out =/.
EDIT
Oh, it's the red sphere render under the dinosaur template? But does it scale with the bounding sphere size? Where's that code located?