I am trying to get the world position of vertices in order to generate ground fog. However, I have had no luck reconstructing those positions, either from depth or via matrix projections, in both the vertex and fragment shaders, such as described here.
When the world positions are written to gl_FragColor, the resulting image should split the world into four color quadrants that do not shift with camera rotation or position, as in this image:
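For reference, a minimal debug fragment shader for that check, assuming the vertex shader passes the world position through a varying (the worldPos name is illustrative):

varying vec3 worldPos;

void main(void) {
    // the signs of worldPos.x and worldPos.z flip at the world origin,
    // producing the fixed four-quadrant coloring
    gl_FragColor = vec4(worldPos, 1.0);
}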
I was looking at that example, but I have no idea how to port it to my own project and assets, since it seems very tailored to that specific environment.
Ah, that makes a big difference. You need to enable depth texture rendering on the camera, and then in your shader reproject the depth back into world space. Not super trivial.
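In a recent engine build the depth map can be requested from a script; a sketch, assuming the script is attached to the camera entity (older projects enable the Depth layer on the camera instead):

var GroundFog = pc.createScript('groundFog');

GroundFog.prototype.initialize = function () {
    // ask the engine to render the scene depth map for this camera
    this.entity.camera.requestSceneDepthMap(true);

    // release the request when the script is destroyed
    this.on('destroy', function () {
        this.entity.camera.requestSceneDepthMap(false);
    });
};

Then, in the shader, this part reads the depth for the current pixel: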
// fetch the linear depth at this pixel and convert it back to
// non-linear (post-projection) depth
float linearDepth = getLinearScreenDepth(uv0);
float depth = delinearizeDepth(linearDepth);
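The ndc vector used below still has to be assembled from the screen UV and that depth; a sketch, assuming uv0 and the delinearized depth are both in the [0, 1] range:

// map screen uv and depth from [0, 1] into the [-1, 1] NDC cube
vec4 ndc = vec4(uv0 * 2.0 - 1.0, depth * 2.0 - 1.0, 1.0);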
and then this bit converts it to world position:
// Transform NDC to world space of the current frame
vec4 worldPosition = matrix_viewProjectionInverse * ndc;
worldPosition /= worldPosition.w;
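With the world position recovered, the fog itself can be driven by height; a sketch with hypothetical fogHeight and fogDensity uniforms, not the exact formula from the example:

uniform float fogHeight;   // hypothetical: world-space top of the fog layer
uniform float fogDensity;  // hypothetical: falloff rate above that height

// full-strength fog at and below fogHeight, fading exponentially above it
float heightAboveFog = max(worldPosition.y - fogHeight, 0.0);
float fogFactor = exp(-heightAboveFog * fogDensity);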
I am using engine v1, but I was doing something similar. I believe the issue is with retrieving the camera's view matrix: the transforms don't seem to line up with the camera's actual movement. I am using the first-person character controller, with a script attached to the camera in the character controller template. I am retrieving the view matrix like this:
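Presumably something along these lines, building the view matrix by inverting the camera entity's world transform (a reconstruction, not the original code):

// invert the camera's world transform to get a view matrix
var viewMatrix = new pc.Mat4();
viewMatrix.copy(this.entity.getWorldTransform()).invert();

// material here is assumed to be the fog material held by the script;
// uViewMatrix is a hypothetical uniform name
this.material.setParameter('uViewMatrix', viewMatrix.data);

If this runs in a script update, it can read the transform before the character controller has moved the camera for that frame, which would explain the mismatch.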
I used Spector.js and was able to find a built-in view-projection matrix; using that instead of the matrices passed in from the JavaScript part of the script got my effect working.
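For reference, matrix_viewProjection is the engine-provided uniform name; declaring it in the shader means the engine supplies a matrix kept in sync with the camera, instead of one passed (possibly a frame late) from JavaScript. A minimal vertex shader using it along with the built-in matrix_model and vertex_position:

attribute vec3 vertex_position;

uniform mat4 matrix_model;          // built-in: model-to-world transform
uniform mat4 matrix_viewProjection; // built-in: supplied by the engine each frame

varying vec3 worldPos;

void main(void) {
    // world position of the vertex, passed on for the fog calculation
    vec4 wp = matrix_model * vec4(vertex_position, 1.0);
    worldPos = wp.xyz;
    gl_Position = matrix_viewProjection * wp;
}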