I’m trying to understand the internal PC mechanism for this behavior:
You can reproduce this live at Slither Stripe - Shaderfrog 2.0 Hybrid Graph Demo by moving the opacity slider on the blue data node.
The way this tool works:
- It takes all of the graph properties (like opacity) and internally builds a throwaway PC material with all of those properties set
- It forces compilation of that material's shader by calling scene.render()
- It scrapes the generated GLSL from the PC-generated shader
- It destroys the internal throwaway material and keeps only the GLSL
- Then, in the client / user-facing component, it creates a new "live" pc.StandardMaterial() and intercepts that material's shader GLSL, replacing it with the GLSL the core system generated.
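The steps above, sketched in plain JavaScript. PlayCanvas itself is stubbed out here; `buildGLSL`, `scrapeGLSL`, and `patchLiveMaterial` are illustrative names, not real PC API:

```javascript
// Stand-in for "PC compiles a shader from material properties"
// (hypothetical; real GLSL generation lives inside the engine).
function buildGLSL(props) {
  const opacityChunk = props.opacity < 1
    ? '  gl_FragColor.a *= uOpacity;\n'
    : '';
  return (
    'void main() {\n' +
    '  gl_FragColor = vec4(1.0);\n' +
    opacityChunk +
    '}\n'
  );
}

// Steps 1-4: build a throwaway "material", force compilation,
// scrape the GLSL, discard the material.
function scrapeGLSL(graphProps) {
  const throwaway = { ...graphProps }; // throwaway material
  const glsl = buildGLSL(throwaway);   // forced compile
  // the throwaway object is dropped here; only the GLSL survives
  return glsl;
}

// Step 5: patch the scraped GLSL into the live user-facing material.
function patchLiveMaterial(liveMaterial, glsl) {
  liveMaterial.fshader = glsl; // intercept + replace
  return liveMaterial;
}

const glsl = scrapeGLSL({ opacity: 0.5 });
const live = patchLiveMaterial({ name: 'live' }, glsl);
```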
In the happy path, both the core and the userland shader are generated with the same properties. However, when the opacity slider changes, PlayCanvas clearly does something different per frame depending on whether or not the material has opacity.
I'm still digging through PC's guts to see what the switch here is, but figured I'd ask as well because it's taking me longer than expected to find. What happens under the hood that causes PlayCanvas to use a separate shader(?) based on opacity? The PC material doesn't appear to have any other shader variations; there's only one, yet PC seems to be swapping out shader(s?) at runtime.
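For what it's worth, my working mental model of how an engine ends up with multiple shaders for "the same" material: compiled shaders are cached per variant key, and that key includes pass/state bits such as "is this draw blended?". This is a generic sketch of that idea, not PlayCanvas's actual code:

```javascript
// Generic shader-variant cache, keyed by render pass + blend state.
// Illustrative only; not the real PlayCanvas program library.
const shaderCache = new Map();

function getShader(material, pass) {
  // A non-opaque material renders with blending (often in a separate
  // pass), so its cache key, and thus its compiled shader, differs.
  const transparent = material.opacity < 1;
  const key = `${pass}|blend=${transparent}`;
  if (!shaderCache.has(key)) {
    shaderCache.set(key, { key, source: `// shader for ${key}` });
  }
  return shaderCache.get(key);
}

const opaqueShader = getShader({ opacity: 1 }, 'forward');
const blendedShader = getShader({ opacity: 0.5 }, 'forward');
// One material definition, two distinct compiled shader objects.
```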
A related thought: For this tool to support multiple generated shaders, I might have to introduce some coupling between the tool's core and the set of variation options that can cause PC to generate multiple shaders. For example, if a shader is opaque, I would need to know that opacity produces two shader variations, and proactively re-run the steps above for both possible variations. I'm trying to avoid this kind of direct coupling to the engine plugin (PlayCanvas), so I want to better understand the mechanism that causes shader variations.
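To make the coupling I'm trying to avoid concrete, here's a sketch of what the core would need: a per-engine table of "variation axes" and a pre-generation pass over every combination. All names here are hypothetical:

```javascript
// Hypothetical per-engine table of properties that fork shader variants.
const PLAYCANVAS_VARIATION_AXES = [
  { name: 'transparent', values: [false, true] }, // e.g. driven by opacity
];

// Cartesian product of all axis values: every variant the core would
// have to proactively scrape GLSL for.
function enumerateVariants(axes) {
  return axes.reduce(
    (variants, axis) =>
      variants.flatMap((v) =>
        axis.values.map((value) => ({ ...v, [axis.name]: value }))
      ),
    [{}]
  );
}

const variants = enumerateVariants(PLAYCANVAS_VARIATION_AXES);
// one entry per combination, e.g. opaque and transparent
```

Each added axis multiplies the number of scrape passes, which is exactly why I'd rather understand the underlying mechanism than hardcode a table like this.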