I’ve recently stumbled across the GeoAR.js implementation for AR.js:
As this looks very promising, I was wondering if it would be possible to integrate a similar feature into PlayCanvas. I assume they use the GPS coordinates to calculate a 3D offset from the world origin, while also updating the camera rotation based on the device rotation. Would that be a starting point for such an implementation, or might there be some additional tricks they use to get it to work?
I was also wondering if anyone knows about a WebXR roadmap that might add this functionality in the future.
As always, thanks for any heads-up!
Hmm, I’m not sure if it’s also handling the AR part of the feature or just using GPS and sensors to work out where it is. If it’s the latter, it should be okay to integrate, but if it’s also doing AR logic (which it might be with AR.js), then it’s a lot harder, as someone would have to port an implementation for the PlayCanvas renderer.
Thanks for your reply. I’m not sure if I understand you correctly.
Don’t the GPS and sensors form the AR logic there (a rudimentary SLAM implementation), which is usually done via marker or image tracking in AR.js?
I actually thought about trying to reimplement this part and use it as a transformation source for a transparent PlayCanvas scene, similar to PlayCanvas’ jsartoolkit-based marker AR.
Actually, it could also provide an initial offset for a WebXR scene, but I don’t think that AR.js currently uses this workflow (probably due to the limited WebXR support).
But maybe I’m on the wrong track here …
Btw: here is a demo of their GPS-based AR.js implementation; it’s quite quirky:
AR.js is tied directly to three.js IIRC and you can’t take the ‘data’ part of AR.js that renders objects etc.
It should be possible to separate it out and have a PlayCanvas port; it’s just that no one has done that publicly yet.
Ok, I’ve started a project to see how far I can get:
It currently reads out your GPS position, offsets it relative to a defined world position, and calculates the offset for the world. I’m not sure if I have to mirror one or both axes, but maybe that could be a start for further investigation into such tracking. Next, I will integrate the device rotation and the camera view to evaluate how well such a quick-and-dirty solution can work.
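The GPS-to-offset step could be sketched roughly like this, assuming an equirectangular approximation (good enough over a few hundred metres) and PlayCanvas’ right-handed, Y-up, −Z-forward coordinate system. The function name and the axis mapping (east → +x, north → −z) are my assumptions, and the mirroring question above applies here too:

```javascript
const EARTH_RADIUS = 6378137; // WGS84 equatorial radius in metres

// Convert a GPS fix into a local offset (metres) from a reference origin.
// Equirectangular approximation: scale longitude by cos(latitude).
function gpsToLocalOffset(lat, lon, originLat, originLon) {
  const d = Math.PI / 180;
  const north = (lat - originLat) * d * EARTH_RADIUS;
  const east = (lon - originLon) * d * EARTH_RADIUS * Math.cos(originLat * d);
  // Assumed mapping into a Y-up, -Z-forward scene; one or both
  // axes may need mirroring depending on the camera setup.
  return { x: east, y: 0, z: -north };
}
```

One would then feed this offset into something like `entity.setPosition(off.x, off.y, off.z)` each time the Geolocation API delivers a new fix; altitude is ignored here since GPS altitude is usually too noisy to be useful.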
Ok, I’ve added the device rotation and the camera view (based on the marker AR code).
To test, set the lat/lng parameters on the cube’s “arObject” component. Currently the rotation (based on the gyro) seems to be off, and I’m not sure if the calculations are correct. Any help appreciated.
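One common reason the gyro rotation looks off is the Euler order: the `deviceorientation` event reports alpha/beta/gamma as intrinsic Z–X′–Y″ rotations, so applying them in a different order gives a subtly wrong result. A sketch of the standard conversion to a quaternion (the function name is mine, and the screen-orientation compensation is left out for brevity):

```javascript
// Convert deviceorientation angles (degrees) to a quaternion.
// The event defines an intrinsic Z (alpha) - X' (beta) - Y'' (gamma) order.
function deviceOrientationToQuat(alpha, beta, gamma) {
  const d = Math.PI / 180;
  const x = ((beta  || 0) * d) / 2; // beta rotates about X'
  const y = ((gamma || 0) * d) / 2; // gamma rotates about Y''
  const z = ((alpha || 0) * d) / 2; // alpha rotates about Z
  const cX = Math.cos(x), sX = Math.sin(x);
  const cY = Math.cos(y), sY = Math.sin(y);
  const cZ = Math.cos(z), sZ = Math.sin(z);
  return {
    w: cX * cY * cZ - sX * sY * sZ,
    x: sX * cY * cZ - cX * sY * sZ,
    y: cX * sY * cZ + sX * cY * sZ,
    z: cX * cY * sZ + sX * sY * cZ,
  };
}
```

In a PlayCanvas script one would presumably remap this into the engine’s axis convention and apply it via the camera entity’s rotation each frame; an additional rotation by `screen.orientation.angle` is needed when the device is held in landscape.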
Hello Rechi, did you keep going with this project? I would be very interested to hear how it’s going now.
No, unfortunately I had no time to continue working on this.
I’m planning to clean up my code for the simple 3DoF VR (rotation sensors) and will then decide whether to test the motion sensors again. But I fear that such a solution for 6DoF would be quite buggy, as other tracking systems (like 8th Wall) don’t rely on motion tracking alone and instead do sophisticated image processing (as far as I know).