Phone Orbit Camera

Good Morning/Day/Evening fellow developers!

Now what I am trying to do is to make a sort of orbit camera, but instead of the orbit camera used in the tutorials and examples, I want to use my phone as the orbit camera. I’ve already seen that the device orientation can be read and that this information is sufficient to rotate the camera. But the movement is tricky… Movement is tracked by the accelerometer, so all you get is acceleration in m/s². Now I was wondering if anyone has done this, or heard of someone achieving it.

What I got so far was this:

window.addEventListener("deviceorientation", function (event) {
    // For tracking orientation
    txt_orientation.findByName("alpha").element.text = "alpha:" + event.alpha.toFixed(4);
    txt_orientation.findByName("beta").element.text = "beta:" + event.beta.toFixed(4);
    txt_orientation.findByName("gamma").element.text = "gamma:" + event.gamma.toFixed(4);

    // Do some basic rotations
    var zRot = Math.floor(event.alpha);
    var xRot = Math.floor(event.beta) - 90;
    var yRot = Math.floor(event.gamma);
    //xRot = origCameraRot.x + xRot;
    camera.setEulerAngles(xRot, yRot, origCameraRot.z);
}, true);
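As a side note (my sketch, not code from this thread): the `deviceorientation` angles are intrinsic Tait–Bryan angles in Z-X'-Y'' order, and converting them to a quaternion before applying them to the camera sidesteps the gimbal problems that feeding raw Euler angles to `setEulerAngles` can run into:

```javascript
// Sketch: convert deviceorientation angles (degrees) to a quaternion.
// The spec defines the angles as intrinsic rotations in Z-X'-Y'' order
// (alpha about Z, then beta about X, then gamma about Y).
function orientationToQuaternion(alpha, beta, gamma) {
    var degToRad = Math.PI / 180;
    var z = alpha * degToRad / 2;   // half-angles for the quaternion terms
    var x = beta * degToRad / 2;
    var y = gamma * degToRad / 2;
    var cz = Math.cos(z), sz = Math.sin(z);
    var cx = Math.cos(x), sx = Math.sin(x);
    var cy = Math.cos(y), sy = Math.sin(y);
    // Product qZ * qX * qY (Z-X'-Y'' intrinsic order), expanded
    return {
        w: cz * cx * cy - sz * sx * sy,
        x: cz * sx * cy - sz * cx * sy,
        y: cz * cx * sy + sz * sx * cy,
        z: sz * cx * cy + cz * sx * sy
    };
}
```

In PlayCanvas you could then apply it with `camera.setRotation(q.x, q.y, q.z, q.w)`; note this sketch does not compensate for screen-orientation changes, which also need handling on mobile.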

To read out the acceleration you can use:

window.addEventListener("devicemotion", function (event) {
    // Process event.acceleration, event.accelerationIncludingGravity,
    // event.rotationRate and event.interval
    txt_Acceleration.findByName("x_Acc").element.text = "X:" + event.acceleration.x.toFixed(4);
    txt_Acceleration.findByName("y_Acc").element.text = "Y:" + event.acceleration.y.toFixed(4);
    txt_Acceleration.findByName("z_Acc").element.text = "Z:" + event.acceleration.z.toFixed(4);

    txt_Acceleration.findByName("alpha").element.text = "alpha:" + event.rotationRate.alpha.toFixed(4);
    txt_Acceleration.findByName("beta").element.text = "beta:" + event.rotationRate.beta.toFixed(4);
    txt_Acceleration.findByName("gamma").element.text = "gamma:" + event.rotationRate.gamma.toFixed(4);
}, true);

Does anyone have an idea how to move the camera according to the acceleration so as to achieve a sort of orbit camera with phone movement? (I am aware that the rotation routine is not good, or even really bad.)

Kind regards.

Are you using the acceleration for the position of the camera? It’s tricky, as you have to keep track of the velocity by integrating the changes in acceleration. Moving the phone at a constant speed also gives the same acceleration reading as keeping the phone still.

Yeah, that would be the idea. I thought of using event.interval, which is the interval at which the sensor reads out data, so it’s basically the dt, right? So I thought of combining this with the acceleration to get how far I’ve moved in an interval, with the formula s = ½at². So let’s say I have an acceleration of 1 m/s² over an interval of 16 ms; this would mean about 0.01 cm. (Probably that is very flawed too.) But yeah, the general idea would be exactly that. But I’ve seen that my phone’s accelerometer (I’ve got an S10+) seems to jump from positive to negative all the time…
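That per-interval idea can be sketched like this (a hedged sketch: `makeIntegrator` is my own naming, and in practice accelerometer bias makes the integrated position drift within seconds):

```javascript
// Sketch: dead-reckoning position from acceleration by integrating twice
// per sensor interval. `dt` is event.interval converted to seconds.
// Each step applies s = v*dt + 0.5*a*dt^2, then updates the velocity.
// Any constant sensor bias grows quadratically in position, which is why
// raw accelerometer integration drifts badly.
function makeIntegrator() {
    var velocity = { x: 0, y: 0, z: 0 };
    var position = { x: 0, y: 0, z: 0 };
    return function step(acc, dt) {
        ['x', 'y', 'z'].forEach(function (axis) {
            var a = acc[axis] || 0;
            // displacement over this interval from current velocity + acceleration
            position[axis] += velocity[axis] * dt + 0.5 * a * dt * dt;
            velocity[axis] += a * dt;
        });
        return position;
    };
}
```

Inside the devicemotion handler you would call `step(event.acceleration, event.interval / 1000)` and feed the result into `camera.setPosition` — with 1 m/s² over 16 ms the first step gives the ~0.0128 cm from the formula above.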

Has anyone got an idea on how to convert it semi-accurately?

You won’t be able to get it ‘exactly’ right. You can move the virtual camera when you move the actual phone, but it’s near-on impossible to get it 1:1. (If it were possible, we would have had 6DoF on Gear VR headsets a while ago.)

You will have to smooth out the accelerometer values (the data is VERY noisy) and take gravity into account in certain scenarios.

Edit: Also, rotating the camera introduces acceleration.
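For the smoothing, a simple exponential low-pass filter is the usual first step (my sketch, with my own naming; the smoothing factor trades noise suppression against lag):

```javascript
// Sketch: exponential low-pass filter for noisy accelerometer samples.
// `k` is the smoothing factor in (0, 1]: smaller values smooth more
// but make the filtered signal lag further behind the raw one.
function makeLowPass(k) {
    var filtered = null;
    return function (sample) {
        if (!filtered) {
            // first sample initialises the filter state
            filtered = { x: sample.x, y: sample.y, z: sample.z };
        } else {
            filtered.x += k * (sample.x - filtered.x);
            filtered.y += k * (sample.y - filtered.y);
            filtered.z += k * (sample.z - filtered.z);
        }
        return filtered;
    };
}
```

You would run `event.acceleration` through this before any integration; the same idea works for the rotation rates.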

Yeah, I’m not aiming for exactly right, just something that is usable. I’m breaking my head over it right now and can’t get any decent result. The orientation with camera rotation is no problem at all, but the movement is terrible… And no one appears to have done such a thing… 8th Wall have it, but I can’t access that code, obviously…

8th Wall have done it by using the camera. They are using the same principle as ARCore and ARKit in calculating the camera’s position. https://medium.com/@kevalpatel2106/exploring-arcore-digging-fundamentals-of-ar-9250ea10c8fd

Hmmm, interesting… Thanks for the information! I’ll try to make some more use of the accelerometer, even though it’s giving me a hell of a time…

Have a Google around. It is unlikely you are the first person to try this. I think some people have even tried this with the Wiimote too.

I’ve researched the topic on Google for a while now, and it appears to be unreliable to do it like that. Hence why there are hardly any examples… The problem is nicely described here: https://www.youtube.com/watch?v=C7JQ7Rpwn2k at minute 23:20.

Would you think it would be possible to do something similar with image processing? Or is there any library that could be used along with PlayCanvas to achieve something similar?

thanks for the exchange so far!

I think you would be best off just integrating 8th Wall into PlayCanvas vs trying to do it yourself. (Or using 8th Wall with one of the other engines like three.js).

Hmm okay. Can this be easily achieved? Thanks for the suggestion!

I had a shot at it but ran out of time before I could get anywhere. Might try again at some point.

Any information you would be keen to share? =) It might be a great help. I’ll probably try it out and would also share my findings if I get anywhere.

It’s tedious to test, as on the PlayCanvas free tier the app is launched from their domain, which the free tier of 8th Wall doesn’t allow.

I was thinking of going engine-only, at least to develop the integration.

Check this

/////////////////////////////////////////////
// the8thWall_test
/////////////////////////////////////////////

const onxrloaded = function ()
{	
	XR.PlayCanvas =
	{
		app: null,
		camera: null,
		canvas: null,
		
		pipelineModule: function ()
		{
			if (!pc)
				throw new Error ("PlayCanvas engine is required for playcanvas pipeline module");
				
			const module =
			{
				name: 'playcanvas',
				onStart: function (data)
				{
					let canvas = document.getElementById ('playcanvas');
					
					let app = new pc.Application (canvas,
			        {
			            mouse: new pc.Mouse (canvas),
			            touch: ('ontouchstart' in window) ? new pc.TouchDevice (canvas) : null,
			            graphicsDeviceOptions:
			            {
			                antialias: true,
			                alpha: true,
			                preserveDrawingBuffer: false,
			                preferWebGl2: true
			            }
			        });
			        
			        app.start ();
        
			        app.setCanvasFillMode (pc.FILLMODE_FILL_WINDOW);
			        app.setCanvasResolution (pc.RESOLUTION_AUTO);
			        
			        app.autoRender = false;
			        
			        app.scene.gammaCorrection = pc.GAMMA_SRGB;
			        app.scene.toneMapping = pc.TONEMAP_FILMIC;
			        app.scene.exposure = 2;
			        app.scene.skyboxIntensity = 3;
	                
	                let camera = new pc.Entity ('Camera');
			        camera.setPosition (0, 1.5, 0);
			        camera.addComponent ('camera');
			        camera.camera.clearColor = new pc.Color (0, 0, 0, 0);
			        app.root.addChild (camera);
			        
			        XR.PlayCanvas.app = app;
			        XR.PlayCanvas.camera = camera;
			        XR.PlayCanvas.canvas = canvas;
				},
				onUpdate: function (data)
				{
					let reality = data.processCpuResult.reality;
					let camera = XR.PlayCanvas.camera;
					
					if (reality)
					{
						// Use a fresh matrix; pc.Mat4.ZERO is a shared constant and must not be mutated
						let matrix = new pc.Mat4 ().set (reality.intrinsics);
						let position = reality.position;
						let rotation = reality.rotation;
					    
					    if (matrix) camera.camera.data.camera.calculateProjection (matrix, pc.VIEW_CENTER);
					    if (position) camera.setLocalPosition (position.x, position.y, position.z);
					    if (rotation) camera.setLocalRotation (rotation.x, rotation.y, rotation.z, rotation.w);
					}
				},
				onRender: function (data)
				{
					XR.PlayCanvas.app.renderNextFrame = true;
				}
			};
			
			return module;
		}
	};
	
	const CustomModule =
	{
		pipelineModule: function ()
		{
			const module =
			{
				name: 'custom',
				onStart: function (data)
				{
					XR.XrController.updateCameraProjectionMatrix
					({
				        origin: XR.PlayCanvas.camera.getLocalPosition (),
				        facing: XR.PlayCanvas.camera.getLocalRotation (),
					});
					
					if (XR.PlayCanvas.app.touch)
			            XR.PlayCanvas.app.touch.on (pc.EVENT_TOUCHSTART, module.onPointerDown, module);
			        else
			        	XR.PlayCanvas.app.mouse.on (pc.EVENT_MOUSEDOWN, module.onPointerDown, module);
				},
				onPointerDown: function (event)
				{
					let camera = XR.PlayCanvas.camera;
			        let ix = event.touches ? event.touches[0].x : event.x;
			        let iy = event.touches ? event.touches[0].y : event.y;
		            let p0 = camera.getLocalPosition ();
		            let p1 = camera.camera.screenToWorld (ix, iy, camera.camera.nearClip);
		            
		            if (p0.y != p1.y)
		            {
		            	let x = p0.x + (p1.x - p0.x) * (0 - p0.y) / (p1.y - p0.y);
		            	let z = p0.z + (p1.z - p0.z) * (0 - p0.y) / (p1.y - p0.y);
		            	
		            	let box = new pc.Entity ('Box');
		                box.addComponent ('model', {type: 'box'});
		                box.setLocalScale (0.02, 0.3, 0.02);
		                box.setLocalPosition (x, 0.15, z);
		                XR.PlayCanvas.app.root.addChild (box);
		            }
				}
			};
			
			return module;
		}
	};
	
	XR.addCameraPipelineModules
	([
	    XR.GlTextureRenderer.pipelineModule (),     	// Draws the camera feed.
	    XR.PlayCanvas.pipelineModule (),				// Creates a PlayCanvas AR Scene.
	    XR.XrController.pipelineModule (),          	// Enables SLAM tracking.
	    XRExtras.AlmostThere.pipelineModule (),			// Detects unsupported browsers and gives hints.
	    XRExtras.FullWindowCanvas.pipelineModule (),	// Modifies the canvas to fill the window.
	    XRExtras.Loading.pipelineModule (),         	// Manages the loading screen on startup.
	    XRExtras.RuntimeError.pipelineModule (),    	// Shows an error image on runtime error.
	    CustomModule.pipelineModule (),
    ]);

	XR.run ({canvas: document.getElementById ('camera')});
};


const load = function ()
{
	XRExtras.Loading.showLoading ({onxrloaded});
};


window.onload = function ()
{
	window.XRExtras
		? load()
		: window.addEventListener('xrextrasloaded', load);
};

Please, let me know if you have made progress… :smile:

cdn.8thwall.com/web/xrextras/xrextras.js

Hey, thank you for the suggestion! Where exactly did you integrate this code? In a PlayCanvas project or in an 8th Wall project? Have you made any progress? We’ve found the code of 8th Wall but, as expected, it is obfuscated and therefore not easy to understand.

I actually got it to work on Android so far. It’s not perfect, but it does work. Just use the AR Starter Kit example (or you can make an extra canvas, draw the video feed via 8th Wall, and put a transparent PlayCanvas canvas on top of it), but disable the marker-finding part, so it will use the video feed but always show what you’re trying to show. Then you need to download your project as @yaustar suggested and edit the index.html. Add your 8th Wall API script tag in the header:

<script async src="https://apps.8thwall.com/xrweb?appKey=XXXXXXX"></script>

and make a new script tag in the body that looks a little like this:

    <script>
        window.addEventListener('playCanvas-Loaded', function () {
            const CustomModule = {
                pipelineModule: function () {
                    const module = {
                        name: 'custom',
                        onStart: function (data) {
                            XR.XrController.updateCameraProjectionMatrix({
                                origin: window.globals.currentCameraPos,
                                facing: window.globals.quat,
                            });

                            XR.XrController.recenter();
                        },
                        onUpdate: function (data) {
                            let reality = data.processCpuResult.reality;

                            if (reality) {
                                // Use a fresh matrix; pc.Mat4.ZERO is a shared constant and must not be mutated
                                let matrix = new pc.Mat4().set(reality.intrinsics);
                                let position = reality.position;
                                let rotation = reality.rotation;

                                if (position) {
                                    window.globals.currentCameraPos.x = 0.75 * position.x;
                                    window.globals.currentCameraPos.y = 0.75 * position.y;
                                    window.globals.currentCameraPos.z = 0.75 * position.z;
                                }
                                if (rotation) {
                                    window.globals.quat.x = rotation.x;
                                    window.globals.quat.y = rotation.y;
                                    window.globals.quat.z = rotation.z;
                                    window.globals.quat.w = rotation.w;
                                }
                            }
                        },
                    };

                    return module;
                }
            };

            XR.addCameraPipelineModules([             // Add camera pipeline modules.
                XR.XrController.pipelineModule(),     // Enables SLAM tracking.
                CustomModule.pipelineModule(),
            ]);

            // Open the camera and start running the camera run loop.
            XR.run({canvas: document.getElementById('application-canvas')});
        });
    </script>

I did the camera synchronization via global variables (obviously this can be done differently), so view this attempt as a proof of concept. I also dispatch an event “playCanvas-Loaded” when all entities, and therefore the scene, have been initialized, which then starts the 8th Wall tracking.
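To make that sync concrete, the PlayCanvas side could be reduced to a small helper like this (hypothetical code of mine, not from the project; `applyPoseFromGlobals` is my naming):

```javascript
// Hypothetical helper: copy the pose that the 8th Wall pipeline module
// writes into window.globals onto a PlayCanvas entity. The globals are
// plain {x, y, z} / {x, y, z, w} objects, so both sides can share them
// without passing pc types around.
function applyPoseFromGlobals(entity, globals) {
    var p = globals.currentCameraPos;
    var q = globals.quat;
    if (p) entity.setPosition(p.x, p.y, p.z);
    if (q) entity.setRotation(q.x, q.y, q.z, q.w);
}

// In a PlayCanvas script this would run every frame on the camera entity:
// CameraSync.prototype.update = function (dt) {
//     applyPoseFromGlobals(this.entity, window.globals);
// };
```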

Okay, I also have another example ready, which worked on iOS. This time your PlayCanvas project won’t need the AR Starter Kit. You just need to make sure to set the clear color of the camera to transparent and enable the setting “Transparent Canvas” under the “Rendering” tab of the project settings. Once downloaded, make sure to edit styles.css and change the background colors to transparent. Also, under #application-canvas add the attribute z-index and set it to 9999 (otherwise iOS will render the other canvas on top and refuse to take the inputs).

Now modify the index.html of your PlayCanvas project by adding the 8th Wall tag to the header:

<script async src="https://apps.8thwall.com/xrweb?appKey=XXXXXXXXXX"></script>

Then in the body make a new script tag and copy the following inside:

        // Add a canvas element for the 8th Wall camera feed to draw on (sits under the PlayCanvas canvas)
        var canv = document.createElement('canvas');
        canv.id = 'cameraFeedXR';
        canv.width = window.innerWidth;
        canv.height = window.innerHeight;
        document.body.appendChild(canv); // adds the canvas to the body element

        window.addEventListener('playCanvas-Loaded', function () {
            const CustomModule = {
                pipelineModule: function () {
                    const module = {
                        name: 'custom',
                        onStart: function (data) {
                            XR.XrController.updateCameraProjectionMatrix({
                                origin: window.globals.currentCameraPos,
                                facing: window.globals.quat,
                            });

                            XR.XrController.recenter();
                        },
                        onUpdate: function (data) {
                            let reality = data.processCpuResult.reality;
                            //let camera = XR.PlayCanvas.camera;

                            if (reality) {
                                // Use a fresh matrix; pc.Mat4.ZERO is a shared constant and must not be mutated
                                let matrix = new pc.Mat4().set(reality.intrinsics);
                                let position = reality.position;
                                let rotation = reality.rotation;

                                /*if (matrix){
                                    console.log(matrix);
                                    window.globals.projMatrix = matrix;
                                }*/

                                if (position) {
                                    window.globals.currentCameraPos.x = 0.75 * position.x;
                                    window.globals.currentCameraPos.y = 0.75 * position.y;
                                    window.globals.currentCameraPos.z = 0.75 * position.z;
                                    //camera.setLocalPosition (position.x, position.y, position.z);
                                }
                                if (rotation) {
                                    window.globals.quat.x = rotation.x;
                                    window.globals.quat.y = rotation.y;
                                    window.globals.quat.z = rotation.z;
                                    window.globals.quat.w = rotation.w;
                                    //camera.setLocalRotation (rotation.x, rotation.y, rotation.z, rotation.w);
                                }
                            }
                        },
                    };

                    return module;
                }
            };

            XR.addCameraPipelineModules([                 // Add camera pipeline modules.
                XR.GlTextureRenderer.pipelineModule(),    // Draws the camera feed.
                XR.XrController.pipelineModule(),         // Enables SLAM tracking.
                CustomModule.pipelineModule(),
            ]);

            // Open the camera and start running the camera run loop.
            XR.run({canvas: document.getElementById('cameraFeedXR')});
        });

Now, as in the other example, I only start the 8th Wall process once all my initialization in PlayCanvas is done, by dispatching the event “playCanvas-Loaded”. If you just want to wait for the 8th Wall stuff to be initialized, wait for the event “xrloaded”. But since I sync the camera position with global variables here, I have to wait until these are initialized, or else updateCameraProjectionMatrix will throw an error.
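For illustration, that initialization order could be sketched like this (names are mine, matching the `window.globals` fields used in the snippets above; the actual code that fires the event is not shown in the thread):

```javascript
// Hypothetical sketch: set up the shared globals before dispatching
// 'playCanvas-Loaded', so updateCameraProjectionMatrix never reads
// undefined values in onStart.
function initGlobals(target) {
    target.globals = {
        currentCameraPos: { x: 0, y: 0, z: 0 },  // written by 8th Wall, read by PlayCanvas
        quat: { x: 0, y: 0, z: 0, w: 1 }         // identity rotation until tracking kicks in
    };
    return target.globals;
}

// In the browser, from the last PlayCanvas script to initialise:
// initGlobals(window);
// window.dispatchEvent(new Event('playCanvas-Loaded'));
```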