I want to build a 3D model that moves when a gyroscope sensor moves. It's a little hard to explain in text, so this link shows what I want to do: http://www.raspberrypi.org/real-time-orientation-display-with-the-minimu-9-v2/
I want to make a simple model, like a triangle or a square, that moves like the one in the video at the link.
In general PlayCanvas can handle anything as long as there is a browser API for it. There are gamepad and orientation APIs in browsers today, although I'm not sure whether they expose this particular data.
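To see whether the orientation API exposes what you need, a minimal sketch like this can help. The `deviceorientation` event reports three angles in degrees; the axis/sign mapping onto a model below (`orientationToEuler`) is my own assumption and usually needs flipping per device:

```javascript
// Hedged sketch: map a deviceorientation event's alpha/beta/gamma
// readings (degrees) onto Euler angles for a 3D model. The axis and
// sign choices here are assumptions, not a fixed convention.
function orientationToEuler(e) {
  return {
    x: e.beta || 0,    // front-back tilt
    y: e.alpha || 0,   // compass heading
    z: -(e.gamma || 0) // left-right tilt, sign flipped (assumption)
  };
}

// In a browser you would feed it from the orientation API:
// window.addEventListener('deviceorientation', function (e) {
//   var rot = orientationToEuler(e);
//   // apply rot to the model here
// });
```

If the event never fires or the angles are all `null`, the device or browser simply doesn't provide the sensor data.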
My idea is to make a flight controller for a quadcopter, and I want to visualize its movements and reactions with a 3D model. I tried raw WebGL, but it took me a long time just to make a simple cube and its movements; it needs many lines of code, and the API is a little difficult to use.
With PlayCanvas it will be very fast to build the model, but I have to understand how to use the sensors with it. I think it's similar to making games for Android, because the sensors are similar and the OS is based on Linux in both cases.
I will look into how games are made for Android with PlayCanvas; the process is probably similar.
Dave, thanks for your help. I will see if it's possible to use a browser API to detect the movements of the sensor.
I see that in PlayCanvas it's possible to add .js scripts; maybe that is a solution too.
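A script could be the bridge: listen for the browser's `deviceorientation` event and rotate the entity from it. This is only a sketch, assuming the `pc.createScript` script format and an assumed axis/sign mapping in `applyOrientation`:

```javascript
// Engine-independent part: apply one deviceorientation reading to an
// entity. The axis/sign mapping is an assumption and may need tweaking.
function applyOrientation(entity, e) {
  entity.setEulerAngles(e.beta || 0, e.alpha || 0, -(e.gamma || 0));
}

// PlayCanvas wiring (browser only; assumes the pc.createScript format):
// var Orient = pc.createScript('orient');
// Orient.prototype.initialize = function () {
//   var self = this;
//   window.addEventListener('deviceorientation', function (e) {
//     applyOrientation(self.entity, e);
//   });
// };
```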
This project implements mouse events, not touch events. Browsers are known to "simulate" mouse events from touch events if no touch handlers are attached, but this simulation is not based on any officially agreed W3C specification, so different browsers might do it differently. If you need to support touch consistently, you need to implement touch events explicitly.
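One common way to do that is to normalize both event types into the same shape and attach one shared handler to both. A minimal sketch (the `canvas`/`onDown` names are placeholders, not from this project):

```javascript
// Normalize either a mouse event or a touch event into plain {x, y}
// screen coordinates, so one code path serves both input types.
function pointerPosition(e) {
  if (e.touches && e.touches.length > 0) {
    return { x: e.touches[0].clientX, y: e.touches[0].clientY };
  }
  return { x: e.clientX, y: e.clientY };
}

// Browser wiring: attach both event families to the same logic, and
// call preventDefault() on the touch side so the browser does not also
// synthesize its own (inconsistent) mouse events.
// canvas.addEventListener('mousedown', function (e) { onDown(pointerPosition(e)); });
// canvas.addEventListener('touchstart', function (e) {
//   e.preventDefault();
//   onDown(pointerPosition(e));
// });
```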
Have you tried remote debugging it? What does the console show: any errors, whether the APIs are available, etc.?