Frame Buffer Picking and Post Processing

Hi,

Having a problem with Frame Buffer Picking and Post Processing. Everything was working great using the picker in our application, but when I implemented a post processing effect the picker just stopped working. Mouse and touch events work, but nothing is returned from the picker.

Is there a way to make this work or do I need to switch to collision picking? Layers maybe?

Hi @jhinrichs,

Check this engine example that implements a picker together with post effects. Maybe something in your scene/layers setup is breaking this?

Check the order in which the example implements picking:
https://playcanvas.github.io/#/graphics/area-picker


Leo,

Thanks I’ll check this out.

I’ve been looking at this for a few hours today, still not having much luck. The example you sent using the bloom post process has layers that look like:

[screenshot: layer list from the engine example]

Our application, using the bloom postProcess (for consistency: all the post processing effects I’ve tried give the same results), has layers that look like this:

[screenshot: layer list from our application]

Looks identical, at least from an order/enabled standpoint, except for the addition of a PostEffectQueue layer in our application. Not sure why your example doesn’t have this layer, and not sure if this is a red herring. Does this seem like where the problem might lie?

I’m not sure what the PostEffectQueue layer is … it’s not something the engine currently uses, I believe. Is this something created on your side?

Try to set this to true: app.scene.layers.logRenderActions = true;
and use the debug engine when testing. This should print out the order in which the cameras / layers are rendered, and also where post processing runs.

Try to compare those between engine example and your project. Maybe post them here.

One other thing that could give you some idea is to run the engine in debug mode again and capture a frame using Spector JS (a Chrome extension, for example) … this will show you exactly what gets rendered.

Sounds good, I’ll give that a shot next.

This app has been growing for the past couple years with various developers adding to it, so it’s possible deep in the app there is already some support for postprocessing layers. I’ll see if I can dig into that too.

This layer used to be added by the engine, in older versions, when post effects were added to a camera.

Leo, Mv,

Yeah, just noticed that. We are still using PlayCanvas Engine v1.36.1 revision 007d3f0. When I replaced that with the latest PlayCanvas it all worked.

If we can, I’ll recommend we update to the latest PlayCanvas engine; otherwise I might have to see if I can find older versions of the postprocessing scripts.


There were some picker issues with post processing that I was fixing a few months ago … I’m not entirely sure it worked before at all. I only had to touch the engine to fix it; there were no changes to the post effect scripts themselves, so getting older versions won’t help.


Thanks, one more question though. In the code below I am trying to add a mesh (basically a background image on a plane) to a new layer and exclude it from the post processing. When I add the line

camera.camera.disablePostEffectsLayer = this.bg_layer.id;

the new layer is no longer affected by the post processing, but the world layer no longer renders. I’m not sure what I am doing wrong here.

  this.bg_layer = new pc.Layer({ name: "Background Layer" });
  const worldLayer = visualizer.app.scene.layers.getLayerByName("World");
  const idx = visualizer.app.scene.layers.getTransparentIndex(worldLayer);
  visualizer.app.scene.layers.insert(this.bg_layer, idx + 1);

  //create background material
  const bg_image = new Image();
  const bg_mat = new pc.StandardMaterial();
  bg_image.crossOrigin = "anonymous";
  bg_image.onload = function() {
    const bg_diffuse = new pc.Texture(visualizer.app.graphicsDevice);
    bg_diffuse.setSource(bg_image);
    bg_mat.diffuseMap = bg_diffuse;
    bg_mat.shininess = 0;
    bg_mat.update();
  };
  bg_image.src = "images/overlay_in.png";

  //add background object
  let bg_entity = new pc.Entity();
  bg_entity.addComponent("model", {
    type: 'box',
    layers: [this.bg_layer.id] //put background on its own layer
  });
  bg_entity.model.material = bg_mat;
  bg_entity.setLocalScale(85.0, 57.6, 0.001);
  bg_entity.setLocalEulerAngles(0, 0, 0);
  bg_entity.setLocalPosition(0, 0, -70);
  camera.addChild(bg_entity);

  camera.camera.clearColorBuffer = true;
  camera.camera.clearDepthBuffer = true;
  camera.camera.layers = [pc.LAYERID_WORLD, this.bg_layer.id, pc.LAYERID_DEPTH, pc.LAYERID_IMMEDIATE, pc.LAYERID_UI ];
  camera.camera.disablePostEffectsLayer = this.bg_layer.id;

  visualizer.cameraEntity.script.brightnessContrast.brightness = 0.5;

Nothing obvious in your code. You add the layer after the world, so that is fine.

I’d suggest to try app.scene.layers.logRenderActions = true; as mentioned before, and also capture the frame with SpectorJS to see what is happening.

I added that logRenderActions line, but nothing is showing up in the console. This project is all engine, btw, not using the editor at all. I’m not sure it would help me anyway because (see below) it all appears to be rendering, and, looking at some example output from other posts, I don’t know that I would be able to decipher it anyway.


so yeah, it all appears to be rendering…

I moved the new entity’s position over to the right and the world is rendering, it’s just rendering BEHIND the entity that lives in the new layer. Seems like everything is fine, but the new layer I created is covering up everything in the world layer?

P.S if I comment out…

//camera.camera.disablePostEffectsLayer = this.bg_layer.id;

… the layering goes back to normal and the world appears to be on top of the new layer as expected.

Do you know why setting the disablePostEffectsLayer would cause the new layer to appear on top of everything else?

This requires the debug version of the engine. Also, this was added maybe 6 months ago, so I’m not sure it is in the older engine you use (are you debugging that one or a new one?)

Layers dictate order of rendering. But overwrite is controlled by the depth buffer. If you render something later that is closer to the camera, it overwrites whatever was there before. Does that make sense?

Yes. Stuff that is rendered before post processing is rendered to a texture using the same depth buffer, so the overwrite works as I mentioned. But stuff rendered after post processing goes straight to the framebuffer, which does not have any depth information in its depth buffer, so you cannot depend on depth there.
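The overwrite rule above can be sketched in plain JavaScript (an illustrative simulation, not engine code): a fragment drawn later only replaces what is stored if its depth says it is closer to the camera.

```javascript
// Toy depth buffer: one cell per pixel, smaller depth = closer to the camera.
// A later draw only overwrites a pixel if it passes the depth test.
function drawFragment(buffer, index, depth, color) {
  if (depth < buffer.depth[index]) {
    buffer.depth[index] = depth;
    buffer.color[index] = color;
  }
}

const buffer = { depth: [Infinity], color: [null] };
drawFragment(buffer, 0, 70, "background"); // far away, drawn first
drawFragment(buffer, 0, 10, "world");      // closer, drawn later: overwrites
drawFragment(buffer, 0, 50, "farther");    // fails the depth test: ignored
// buffer.color[0] is "world"
```

Once rendering happens after post processing, the equivalent of `buffer.depth` is gone, so draw order alone decides which pixels win.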

Eh, ok.

I wonder if I am approaching this wrong. All I want to do is have an interaction with a static background. The entities which appear on top of the static background can have post processing applied. The static image in the background would be unaffected by post processing.

The easy answer would be to put the image in the background with HTML and a transparent canvas. The problem is we want to allow the user to screen capture the canvas, including the static background.

Is there another approach that might allow this functionality?

What kind of post processing are you talking about here? Things like a bloom, which leaks outside of the objects (and over your background), or something that doesn’t?

Search for “bloom” on the forum here … we’ve had a few similar discussions and possible solutions already.

we are doing brightness contrast and possibly hue saturation.

If the post processing isn’t going to work, we might be able to fake it by modifying all the materials. Not sure about contrast though; we might have to modify all the textures?

Modification of materials might be easier and faster at runtime as well (mobile).

Fragment shaders run this as the last bit:

so you could replace this chunk on your materials, and apply your modification at the end of it by writing to gl_FragColor.rgb.

Try to add something like

gl_FragColor.rgb = gl_FragColor.rgb * 2.0;

It might not compile; you might need to use a temporary variable, but I’m sure you can get it to work.

There should be no need to modify textures; you can do contrast / brightness / hue right here.
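For reference, the brightness/contrast adjustment you would write into that chunk (via gl_FragColor.rgb) is just per-channel arithmetic. A plain JavaScript sketch of one common formula, assuming channels in [0, 1] (the engine’s posteffect scripts may use a slightly different formula):

```javascript
// One common brightness/contrast formula, per channel in [0, 1]:
//   out = (in - 0.5) * (contrast + 1) + 0.5 + brightness
function adjustChannel(value, brightness, contrast) {
  const adjusted = (value - 0.5) * (contrast + 1.0) + 0.5 + brightness;
  return Math.min(1.0, Math.max(0.0, adjusted)); // clamp, like the GPU output
}

function adjustColor(rgb, brightness, contrast) {
  return rgb.map(function (channel) {
    return adjustChannel(channel, brightness, contrast);
  });
}

// brightness 0.5 lifts every channel by 0.5 (the last one clamps at 1.0)
const lifted = adjustColor([0.2, 0.4, 0.6], 0.5, 0.0); // ≈ [0.7, 0.9, 1.0]
```

In GLSL the same adjustment would read gl_FragColor.rgb = (gl_FragColor.rgb - 0.5) * (contrast + 1.0) + 0.5 + brightness; where brightness and contrast are values you bake in or pass as uniforms.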
