How can I have a fog effect like BMW i8 - PLAYCANVAS (the BMW car example)?
How can I make the sky and the floor blend into each other as in the example?
Thanks in advance
It’s a skybox with the mip level set to something above 1 (it actually isn’t fog). Example project: https://playcanvas.com/editor/scene/549982
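If you'd rather set that from a script than in the Editor's rendering settings, a minimal sketch (assuming a prefiltered skybox cubemap is already assigned to the scene) would be:
// blur the background by sampling a higher, prefiltered mip level of the skybox
this.app.scene.skyboxMip = 3;          // 0 = base (sharp) level, higher values = blurrier
this.app.scene.skyboxIntensity = 1.0;  // optional: tweak the brightness to taste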
Thanks, I need a good skybox!
The skybox in the example project is part of the asset library that ships with PlayCanvas
I applied the mipmap setting to “blur” the environment, but how do I “mix” the ground plane with it? It looks like there is some kind of mask around the car. How can that be achieved?
Oh, I got it, but I think the mask is inverted: white on the visible plane, black to fade out into the cubemap. I'm going to try something like that, thank you
Oh, the black is just the fake car shadow
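For the ground itself, one way to get that fade is a plane whose material uses a radial opacity mask (opaque under the car, transparent at the edges) so it dissolves into the blurred skybox. A rough sketch, assuming a hypothetical “groundFade” mask texture and a plane entity with a render component:
// sketch only: fade a ground plane out into the blurred skybox
var groundMat = new pc.StandardMaterial();
groundMat.blendType = pc.BLEND_NORMAL;   // enable alpha blending
groundMat.opacityMap = this.app.assets.find('groundFade').resource; // hypothetical asset: white = visible, black = fades out
groundMat.depthWrite = false;            // avoid sorting artefacts on the transparent plane
groundMat.update();
this.entity.render.material = groundMat;
The dark patch under the car in the example is just a baked shadow texture on that plane, not part of the mask.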
It sounds like you’re looking for a ground fog effect (also known as segmented fog) where the fog only appears between certain world y-values.
Here is a post-effect shader that applies ground fog below a certain y-value. For your case you can add “if” statements for the world-height range in which you want the fog to appear. Note that because it’s a post-effect it is rendered on a full-screen quad, so we cannot access the world position of the fragments in the shader directly; we have to reconstruct it from the depth buffer using the inverse view-projection matrix.
Here’s the fragment shader code:
// uniforms set from the script (uSceneDepthMap and camera_params come from the screenDepthPS chunk)
varying vec2 vUv0;
uniform sampler2D uColorBuffer;
uniform float uFarClip; // currently unused here, but set from the script
uniform mat4 uInverseViewProjectionMatrix; // passed in from the CPU

vec3 getWorldPosition(vec2 uv, float depth) {
    // unproject the screen-space position (uv + non-linear depth) back into world space
    vec4 temp = uInverseViewProjectionMatrix * vec4(uv * 2.0 - 1.0, depth, 1.0);
    return temp.xyz / temp.w;
}

void main() {
    // float depth = getLinearScreenDepth(vUv0) * camera_params.x; // gives linear depth 0-1
    vec3 color = texture2D(uColorBuffer, vUv0).rgb; // scene colour (passthrough)
    vec3 fog_color = vec3(1.0, 1.0, 1.0);           // white fog
    float depth_nonlinear = texture2D(uSceneDepthMap, vUv0).r;
    vec3 world_pos = getWorldPosition(vUv0, depth_nonlinear);
    if (world_pos.y < 0.0) {
        // fade towards the fog colour the further below y = 0 the fragment is
        float fog_amount = clamp(-world_pos.y * 0.005, 0.0, 1.0);
        gl_FragColor = vec4(mix(color, fog_color, fog_amount), 1.0);
    } else {
        gl_FragColor = vec4(color, 1.0);
    }
}
And for completeness, here is my vertex shader:
attribute vec3 aPosition;

varying vec2 vUv0;

void main(void) {
    vec2 vPosition = aPosition.xy;
    gl_Position = vec4(vPosition, 0.0, 1.0);
    vUv0 = (vPosition.xy + 1.0) * 0.5;
}
And the JavaScript to set up the post-effect and the script that adds it to the camera:
pc.extend(pc, function () {
    // Constructor - creates an instance of our post-effect.
    // It prepends the engine's built-in screen-depth shader chunk to our custom
    // vertex and fragment shaders, which are passed in as source strings.
    var GroundFog = function (graphicsDevice, cameraComponent, vs, fs) {
        // call the parent constructor so this.device, this.vertexBuffer etc. are set up
        pc.PostEffect.call(this, graphicsDevice);

        // camera component whose matrices and far clip we need every frame
        this.camera = cameraComponent;

        // this is the shader definition for our effect
        const vertex = `#define VERTEXSHADER\nprecision highp float;\n` + pc.shaderChunks.screenDepthPS + vs;
        const fragment = `#define FRAGMENTSHADER\nprecision highp float;\n` + pc.shaderChunks.screenDepthPS + fs;
        this.shader = pc.createShaderFromCode(graphicsDevice, vertex, fragment, 'FogShader');
    };

    // Our effect must derive from pc.PostEffect
    GroundFog = pc.inherits(GroundFog, pc.PostEffect);

    // Connect the post-effect to the uniforms in the shader
    GroundFog.prototype = pc.extend(GroundFog.prototype, {
        // Every post-effect must implement the render method, which
        // sets any parameters that the shader might require and
        // also renders the effect on the screen
        render: function (inputTarget, outputTarget, rect) {
            var device = this.device;
            var scope = device.scope;

            // Set the input render target on the shader. This is the image rendered by our camera
            scope.resolve("uColorBuffer").setValue(inputTarget.colorBuffer);
            scope.resolve("uFarClip").setValue(this.camera.farClip);

            // Calculate the inverse view-projection matrix
            var inverseViewProjectionMatrix = getInverseViewProjectionMatrix(this.camera.viewMatrix, this.camera.projectionMatrix);
            scope.resolve("uInverseViewProjectionMatrix").setValue(inverseViewProjectionMatrix.data);

            // Draw a full-screen quad on the output target. In this case the output target is the screen.
            // Drawing a full-screen quad runs the shader that we defined above
            pc.drawFullscreenQuad(device, outputTarget, this.vertexBuffer, this.shader, rect);
        }
    });

    return {
        GroundFog: GroundFog
    };
}());
//--------------- SCRIPT DEFINITION ------------------------//
var GroundFogShader = pc.createScript('groundFogShader');

// shader assets for the effect (assign your ground-fog vert/frag shader assets in the Editor)
GroundFogShader.attributes.add('vertShader', { type: 'asset', assetType: 'shader' });
GroundFogShader.attributes.add('fragShader', { type: 'asset', assetType: 'shader' });

// initialize code called once per entity
GroundFogShader.prototype.initialize = function () {
    // ask the engine to render the scene depth map that uSceneDepthMap samples
    this.entity.camera.requestSceneDepthMap(true);

    // optional: render-pass tracing, useful for debugging
    pc.Tracing.set(pc.TRACEID_RENDER_FRAME, true);
    pc.Tracing.set(pc.TRACEID_RENDER_PASS, true);
    pc.Tracing.set(pc.TRACEID_RENDER_PASS_DETAIL, true);

    var effect = new pc.GroundFog(
        this.app.graphicsDevice,
        this.entity.camera,
        this.vertShader.resource,
        this.fragShader.resource
    );

    // add the effect to the camera's postEffects queue
    var queue = this.entity.camera.postEffects;
    queue.addEffect(effect);

    // when the script is enabled add our effect to the camera's postEffects queue
    this.on('enable', function () {
        queue.addEffect(effect);
    });

    // when the script is disabled remove our effect from the camera's postEffects queue
    this.on('disable', function () {
        queue.removeEffect(effect);
    });
};
function getInverseViewProjectionMatrix(viewMatrix, projectionMatrix) {
    // Combine the view and projection matrices
    var viewProjectionMatrix = new pc.Mat4();
    viewProjectionMatrix.mul2(projectionMatrix, viewMatrix);

    // Invert the view-projection matrix
    var inverseViewProjectionMatrix = new pc.Mat4();
    inverseViewProjectionMatrix.copy(viewProjectionMatrix).invert();

    return inverseViewProjectionMatrix;
}
Special thanks to @fad on the Shadertoy Discord for helping me with the matrix math and the non-linearized depth (which apparently still goes from 0 to 1, just not linearly).