Picker with VR mode

Hi,
I have an application that uses the pc.Picker class to pick objects (entities).
I use it to pick whatever object is in the center of the screen (canvasWidth/2, canvasHeight/2).
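
For reference, my picking code is basically this (a minimal sketch, assuming the standard pc.Picker API; the camera entity and canvas names are just illustrative):

```javascript
// Minimal sketch of my setup (camera entity / canvas names are illustrative)
var canvas = this.app.graphicsDevice.canvas;
var picker = new pc.Picker(this.app, canvas.clientWidth, canvas.clientHeight);

// Render the pick buffer from the active camera
picker.prepare(this.cameraEntity.camera, this.app.scene);

// Query the pixel at the center of the canvas
var selection = picker.getSelection(canvas.clientWidth / 2, canvas.clientHeight / 2);
if (selection.length > 0) {
    var picked = selection[0].node; // mesh instance -> graph node / entity
}
```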

The thing is that it works like a charm in desktop mode but it does not work in VR mode.
I have tried every possible combination:
canvasWidth/4, canvasHeight/2
canvasWidth/2, canvasHeight/4
canvasWidth/2, canvasHeight/2
canvasWidth, canvasHeight

but none works.

I use it on a mobile phone with Chrome and a Samsung Gear VR. It seems that Chrome warps the screen to match the deformation of the Samsung Gear VR lenses. However, I have no idea how to undo this transformation, if this is even the problem, which I am not sure about. In fact, I have also tried it with an HTC Vive, where the canvas does not seem to be warped, and it doesn’t work either.

Is the pc.Picker class ready to work with the VR mode?
If not, how can I make it work?

PS: I need to use the pc.Picker to pick the object, so other methods such as a ray cast are not an option.

I don’t think pc.Picker is suitable for this because, effectively, there are two cameras rendering in VR. I would check the engine code to see what it is doing with the camera when it presents VR. Maybe you can get some clues there?

Otherwise, what are the reasons for using pc.Picker over raycasts/pc.Shape?

Because the raycast doesn’t work with 3D models that are imported and scaled programmatically, and I need to select that kind of object.

You can probably use the pc.Shape tools for this as you can resize them on the fly: https://github.com/playcanvas/engine/tree/master/src/shape

It’s what I used for WebVR Labs and there’s a smaller example here: https://developer.playcanvas.com/en/tutorials/entity-picking-without-physics/
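
Roughly, the idea is something like this (a rough sketch along the lines of that tutorial, not tested here; entity names are illustrative):

```javascript
// Rough sketch: pick by casting a pc.Ray through the screen center
// and testing it against a pc.BoundingBox that you resize yourself
var ray = new pc.Ray();
var aabb = new pc.BoundingBox();

function isLookingAt(cameraEntity, targetEntity, canvas) {
    var camera = cameraEntity.camera;

    // Build a world-space ray from the camera through the canvas center
    camera.screenToWorld(canvas.clientWidth / 2, canvas.clientHeight / 2,
        camera.farClip, ray.direction);
    ray.origin.copy(cameraEntity.getPosition());
    ray.direction.sub(ray.origin).normalize();

    // Resize the box to the entity's current scale, so runtime scaling
    // of imported models is picked up automatically
    aabb.center.copy(targetEntity.getPosition());
    aabb.halfExtents.copy(targetEntity.getLocalScale()).scale(0.5);

    return aabb.intersectsRay(ray);
}
```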

Not a perfect solution but may help?

I am building an AR authoring tool (plus the corresponding player), so that’s not a solution. I need to use the pc.Picker and I need to solve the problem with the VR mode…

Can you explain why it’s not a solution? If physics raycasts were a possibility but were only rejected due to scaling issues, pc.Shape should be a suitable substitute.

If you really need pc.Picker, then you have to start digging into the engine code to see if it’s even a possibility.

Because the user will insert 3D models into the application in real time. It is an authoring tool. The user will be the editor of a 3D scene that will be created at run-time. The 3D scene will not be preloaded.
I cannot tell users: “don’t use 3D models because if you insert a 3D model and you scale it, the application won’t work”.

But you can scale the size of the pc.Shapes at runtime as I mentioned before. So I don’t see the issue?

E.g https://playcanvas.com/editor/scene/743851 (press 1, 2, 3 to change the size of the box)

Looking at the pc.Picker side, it uses a color buffer to work out what it has picked, so at a guess, the VR render color buffer will look completely different (see top left here). The pixel position of where you pick in the color buffer will be completely different.

[image: Google Cardboard-style viewer with a phone showing the side-by-side per-eye render]

You might be able to use a second camera that is not presenting in VR and turn it on only for picking objects. It would be slow though, as you are now rendering a few extra frames.
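
Very roughly, something like this (untested sketch, names are illustrative):

```javascript
// Untested sketch: a second camera that mirrors the VR camera's pose
// but renders a normal (non-VR) view for pc.Picker to sample
var pickCamera = new pc.Entity('pickCamera');
pickCamera.addComponent('camera');
pickCamera.camera.enabled = false; // only used while picking
app.root.addChild(pickCamera);

function pickAtCenter(app, vrCameraEntity, picker, canvas) {
    // Match the VR camera's pose so the pick matches where the user is looking
    pickCamera.setPosition(vrCameraEntity.getPosition());
    pickCamera.setRotation(vrCameraEntity.getRotation());

    pickCamera.camera.enabled = true;
    picker.prepare(pickCamera.camera, app.scene);
    var selection = picker.getSelection(canvas.clientWidth / 2, canvas.clientHeight / 2);
    pickCamera.camera.enabled = false;

    return selection.length ? selection[0].node : null;
}
```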

The issue is that the users want to use 3D models. They don’t want to use simple shapes.
The fact that 3D models can’t be scaled up/down properly, because the colliders are not recalculated, is an issue that the PlayCanvas engine should fix.

So the issue is that you want mesh colliders over primitives? pc.Shapes aren’t renderable shapes, they are effectively just bounding boxes. If you are able to get away with primitive colliders then you can still use pc.Shapes as seen in this updated project.

https://playcanvas.com/project/612778/overview/scaling-up

Ammo.js doesn’t support runtime scaling up and down of mesh colliders. The way that the editor gets around this for mesh colliders is to destroy and re-add the collider at runtime (which is something you could do after the user has finished scaling).

(You can see it being done here and here: https://github.com/playcanvas/engine/blob/1f0908774780fa21442a7ada1a12ef99437a87f9/src/framework/components/collision/component.js#L188
https://github.com/playcanvas/engine/blob/1f0908774780fa21442a7ada1a12ef99437a87f9/src/framework/components/collision/component.js#L153)

Here’s a project that does that:
https://playcanvas.com/editor/scene/743871 (Press 1, 2, 3 to change sizes)
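
In a script, the destroy/re-add trick could look roughly like this (hedged sketch, not tested; the function and parameter names are just illustrative):

```javascript
// Hedged sketch: force a mesh collider to rebuild after scaling an entity,
// since Ammo.js won't rescale an existing mesh collision shape
function rescaleWithMeshCollider(entity, newScale, meshAsset) {
    entity.setLocalScale(newScale, newScale, newScale);

    // Recreate the collision component so the shape is rebuilt at the new scale
    entity.removeComponent('collision');
    entity.addComponent('collision', {
        type: 'mesh',
        asset: meshAsset
    });
}
```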

I could try that, although it implies changing my object selection system completely.
I would prefer to make the pc.Picker work in VR. It shouldn’t be that hard, right?

As mentioned above, pc.Picker uses the rendered color buffer to work out what was picked. The tricky part is that in VR, the canvas is completely different, with both eyes rendered on the same canvas (that’s my assumption). How you remap the pixel screen position from a typical canvas (let’s say the center of the screen) to a VR one IS the tricky part, on top of working out which eye to use as the base for the picker.

Alternatively, you could try the second camera approach mentioned above to bypass all of this at the cost of performance.

Edit: If you do figure out a way to get it working in VR, that would be great of course. I just don’t think it is as easy as you think it is.