Hello guys, a few days ago I saw on TV the presentation of a headset where you can plug in the new Samsung phone and use it as a virtual reality device. Since PlayCanvas allows you to create games for mobile phones, I thought you could add a new preset camera function that enables 3D visualization: calculating the distance of objects from the camera and creating the shifted red/green (anaglyph) images. It would be useful for PC apps too; just by using red/green glasses, one could experience awesome 3D effects. What do you think?
I think you saw the Samsung GearVR device. To target that device with PlayCanvas, you can set up stereoscopic rendering in your app. We already have a 360 Video Player example project that does this, although it doesn't specify any eye separation. This is because it's not 3D video, so eye separation wouldn't have any effect. Anyway, if you have a Cardboard VR device, it's worth having a play with that project.
To target VR devices like Oculus and Vive, you can use the WebVR integration scripts available on GitHub. WebVR provides low-latency head tracking and also handles the fish-eye distortion effect for each eye.
Sorry, it's not clear if there is a script or setting for VR, specifically for Cardboard via the mobile web. It seems that the 360 Video Player is just for video. What about non-video scenes?
The 360 Video Player scene works for any kind of scene.
In the case of the video player we're rendering video to a sphere. You could also render a still image.
Or you could remove the sphere entirely and the camera will work for a 3D scene.
I was playing with the 360 Video scene that Dave is talking about, and made some modifications for it to work with Google Cardboard, particularly on iOS devices.
Feel free to give it a try:
http://playcanv.as/e/p/ATuuvNEK/
The project can be forked here, in case anybody else finds it useful:
http://playcanvas.com/project/396111/overview/google-cardboard-prototype
I cannot get the phone (iPhone 6+) to respond gyroscopically. Does it work for you?
Hmm… you're right. I'm testing on my iPhone 5, and I'm getting the same issue, but when I launch the project through the editor, the accelerometer rotations do work:
http://playcanvas.com/editor/scene/428762/launch
I'll look into it tonight when I get home. I'm not sure why the published version isn't working.
Ok, found the problem. In order to detect orientation changes, I need to add a deviceorientation event listener. When I publish the project, the build gets saved on an Amazon server and is delivered via an iframe to the PlayCanvas site. This, paired with the deviceorientation listener, creates a security issue due to the same-origin policy:
Blocked attempt add device motion or orientation listener from child frame that wasn't the same security origin as the main page.
@max, do you have any suggestions to circumvent this issue?
Safari on iOS doesn't support acceleration on insecure (non-https) or cross-domain origins.
Make sure you use the https link (replace http with https, e.g. https://playcanvas.com/editor/scene/428762/launch)
And for published projects, you currently need to use the 'embed' link. So for a published app which has the URL https://playcanv.as/p/123456, use https://playcanv.as/e/p/123456
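Dave's URL rewrite above can be sketched as a small helper (the `toEmbedUrl` name is my own, not part of any PlayCanvas API):

```javascript
// Rewrite a published PlayCanvas URL to its https 'embed' form,
// which sidesteps the iframe same-origin restriction on deviceorientation.
// Hypothetical helper, not a PlayCanvas API.
function toEmbedUrl(url) {
    return url
        .replace(/^http:\/\//, 'https://')          // force the secure origin
        .replace('playcanv.as/p/', 'playcanv.as/e/p/'); // switch to the embed path
}

console.log(toEmbedUrl('http://playcanv.as/p/123456'));
// -> https://playcanv.as/e/p/123456
```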
Excellent! Thanks, Dave. I didn't know you guys had the 'e/' embed shortcut.
@creg this link should work on iPhone 6:
http://playcanv.as/e/p/ATuuvNEK/
Oh, the link redirects… Yay, that worked! Thanks!
So is there a script you dropped in that handles the split view?
I see that you have a dual camera system set up in the editor. Do you know if it's possible to clone a public project? Or would I need to set it up manually as you have it?
Yes, as long as it's public, you should be able to make clones:
Visit the project overview, click 'Fork', and give it a name. The project will be cloned to your projects section.
And yes, for stereoscopic view, you need to set up two cameras side by side. Here's a quick explanation:
First, you set up two cameras so each one takes up half of the viewport
Camera Left viewport settings: x: 0, y: 0, w: 0.5, h: 1
Camera Right viewport settings: x: 0.5, y: 0, w: 0.5, h: 1
Then, you space them out a bit, to help provide the illusion of depth
Camera Left position: x: -0.1, y: 0, z: 0
Camera Right position: x: 0.1, y: 0, z: 0
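The two steps above can be sketched as a small helper that returns the viewport rects and eye positions (the `makeStereoRig` name and the `separation` parameter are my own; in a PlayCanvas script you would apply these values through each camera's viewport settings and each entity's position):

```javascript
// Compute viewport rects (x, y, w, h in normalized 0-1 units) and eye
// positions for a simple side-by-side stereo rig.
// 'separation' is the total distance between the two eyes (0.2 here).
// Hypothetical helper, not a PlayCanvas API.
function makeStereoRig(separation) {
    return {
        left:  { rect: [0,   0, 0.5, 1], position: [-separation / 2, 0, 0] },
        right: { rect: [0.5, 0, 0.5, 1], position: [ separation / 2, 0, 0] }
    };
}

const rig = makeStereoRig(0.2);
console.log(rig.left.position);  // left eye sits at x = -0.1
```

Keeping the separation in one place makes it easy to tune the depth illusion for different devices.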
I was able to get my demo working, camera-wise, but now it's stuttering along (on the iPhone 6+) and I'm not sure why:
EDIT: I had way too many spot lights in play. Once I removed redundant lights, the speed issue went away.
Thanks again for all of your help @marquizzo and @dave !
ZuiHou.prototype.update = function (dt) {
    // Check for non-null readings; a plain truthiness test would
    // wrongly skip angles that are exactly 0
    if (this.alpha !== null && this.beta !== null && this.gamma !== null) {
        // Reset the entity, then apply the deviceorientation offsets
        this.entity.setEulerAngles(-90, 0, 0);
        this.entity.rotate(0, 0, -this.gamma); // roll
        this.entity.rotate(this.beta, 0, 0);   // pitch
        this.entity.rotate(0, this.alpha, 0);  // yaw (compass heading)
        // Compensate for the current screen orientation
        this.entity.rotateLocal(0, 0, this.orientation);
    }
};
Do you know how to add touch (VR + touch)? Please! Like this (please use your mobile phone to open it): http://changan.itqiche.com/api.php/WebVR/index/id/20171012191031Osrn9
my project: https://playcanvas.com/editor/scene/557880
You are best off creating a new thread about your problem as I don't think it relates to the VR camera talk here.
Don't forget to include a link to the project if you can.
Thank you! The address has now been added :grinning:
It looks like you are using some custom HMD VR code/polyfill so I'm assuming that the HMD that you are using has some sort of 'touch' input similar to the Google Cardboard.
These tutorials/code samples should help: https://developer.playcanvas.com/en/tutorials/?tags=input
Oh, no no no.
I'm not using an HMD; I'm only using the 'zuihou' script!
Even then, the same code for touch input in the tutorials/code samples applies.