Shaderfrog 2.0 - a PlayCanvas (and Babylon.js, and Three.js) Shader/Material editor demo


Hi folks, my name is Andy. A few years ago I built a tool called Shaderfrog, which is/was a Three.js shader composer. For the last few years, as a nights-and-weekends project, I’ve been working on the next iteration of this tool.

Shaderfrog 2.0 is a new type of graph editor I’m calling a “hybrid graph,” which lets you write raw GLSL in nodes, composes nodes together automatically, and builds the final shader source code for you.

Believe it or not, this tool is engine agnostic, since it works with raw GLSL. It supports engines as plugins. I started off building plugins for Three.js and Babylon.js, and recently completed enough work on the PlayCanvas plugin to feel comfortable sharing it here.


:warning: Warning, this demo has plenty of bugs, and it’s easy to cause a JavaScript error that requires a page refresh!

:warning: There is currently no way to export shaders! This is just a demo / early prototype.

To cut to the chase, here are some cool demos I’ve built with the editor:

Using a custom shader as the vertex displacement map in a PlayCanvas StandardMaterial() (Live)

In any of these demos, you can double click on the red or green source code nodes to edit the GLSL. For example, you can click on the red “Big Wiggles” node in the above graph to see how it generates the vertex displacement in GLSL. You can edit that GLSL, and click “compile”, to update the output.
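The actual “Big Wiggles” GLSL isn’t reproduced here, but as a rough illustration of the kind of math a vertex displacement node computes, here is the same idea in plain JavaScript: offset each vertex along its normal by a sine wave driven by position and time. All names and constants below are made up for the sketch.

```javascript
// Illustrative only: offset a vertex along its normal by a sine wave,
// the same kind of math a GLSL vertex displacement node performs.
function displaceVertex(position, normal, time, amplitude = 0.5, frequency = 3.0) {
  // Wave driven by the vertex's y coordinate and an animated time value
  const wave = Math.sin(position[1] * frequency + time) * amplitude;
  return [
    position[0] + normal[0] * wave,
    position[1] + normal[1] * wave,
    position[2] + normal[2] * wave,
  ];
}

// A vertex at the origin at time 0: sin(0) = 0, so there is no offset.
console.log(displaceVertex([0, 0, 0], [0, 1, 0], 0)); // [0, 0, 0]
```

In the real node this runs per-vertex on the GPU, with `time` supplied as a uniform.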

Using a custom Voronoi shader as the normal map (Live)

Warning: This Voronoi demo is pretty slow, I think because the material is double-sided and doing unnecessary renders. You can do the same thing here: double click on the green Voronoi node.

Composing PlayCanvas materials with other shaders (Live)

The hybrid graph lets you treat any GLSL shader as a node in the graph. This demo shows how a full PlayCanvas shader, with refraction and lighting, can be arbitrarily composed with other nodes. This one is harder to explain from a technical context - to fully explain it requires explaining how most of the system works.

Using a custom shader for the diffuse map (Live)

Not as fancy a demo, but it looks cool!

More detail

You can see more shaders at the home page, including Babylon.js and Three.js shaders: Shaderfrog 2.0. The core tech that powers them is the same, and there are plugins defined for each engine to do things like define how the engine’s standard material is created.

I don’t know what the future holds for this demo. Export support is one of the biggest features I think is needed before an official launch.

Your feedback is welcome, if you get a chance to try it out!


That is so nice and powerful! Thanks for sharing @andyandy.


So great, I look forward to seeing where this will go in the future, thanks!


Shaderfrog blog post explaining the state of the editor: Introducing the Shaderfrog 2.0 Hybrid Graph Editor Alpha


Another example of shader interoperability. I took the shader/effect from a three.js forum post: Calculating vertex normals after displacement in the vertex shader - Resources - three.js forum

And ported it to PlayCanvas: Liquid Glass - Shaderfrog 2.0 Hybrid Graph Demo

Porting individual shaders isn’t too hard to do by hand for simple effects. It’s mostly renaming uniforms/varyings (Three.js has modelMatrix, PlayCanvas has matrix_model, etc.) and some minor math updates, depending on which matrices are available. Most (but not all) of this can be automated, so in theory someone could write a single GLSL effect and have it automatically portable to all the major WebGL engines.
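As a rough sketch, the renaming step could be a whole-word substitution over the GLSL source. Only the modelMatrix/matrix_model pair comes from the post above; the other entries in the mapping are assumptions, and this is not Shaderfrog’s actual porting code:

```javascript
// Illustrative identifier mapping for porting Three.js GLSL to PlayCanvas.
// Only modelMatrix -> matrix_model is confirmed above; the rest are assumed.
const THREE_TO_PLAYCANVAS = {
  modelMatrix: 'matrix_model',
  viewMatrix: 'matrix_view',
  projectionMatrix: 'matrix_projection',
};

function renameIdentifiers(glsl, mapping) {
  // Replace whole-word identifier matches only, so e.g. a hypothetical
  // "modelMatrixInverse" would be left untouched.
  return glsl.replace(/\b\w+\b/g, (word) => mapping[word] ?? word);
}

const threeSource =
  'gl_Position = projectionMatrix * viewMatrix * modelMatrix * vec4(position, 1.0);';
console.log(renameIdentifiers(threeSource, THREE_TO_PLAYCANVAS));
// gl_Position = matrix_projection * matrix_view * matrix_model * vec4(position, 1.0);
```

A real port would also need a proper GLSL parser rather than a regex (to skip comments and strings), plus the math fixups mentioned above, which is why only most of it can be automated.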

There’s one hack-ish thing in this graph to force the normal varying to override the built-in PlayCanvas one. Basically, the vertex noise node sets vNormalW to update the normals, but in PlayCanvas, vNormalW is set after the vertex node is called, so it “overrides” the custom effect. The hack in the graph is to override PlayCanvas’s assignment to vNormalW to basically set vNormalW to itself, so it preserves the one from the custom node. While typing this I realized this could be automated: if one effect sets a varying, monkeypatch the PlayCanvas engine shader to remove any lines that assign to that varying.
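A minimal sketch of that automation idea in plain JavaScript (the function name and the sample engine source are hypothetical, not Shaderfrog’s actual code):

```javascript
// Hypothetical helper: drop any engine-shader lines that reassign a varying
// a custom node already writes, so the custom value survives.
function stripVaryingAssignments(glsl, varyingName) {
  const assignment = new RegExp('^\\s*' + varyingName + '\\s*=');
  return glsl
    .split('\n')
    .filter((line) => !assignment.test(line))
    .join('\n');
}

// Made-up stand-in for a PlayCanvas engine vertex shader body
const engineVertexShader = [
  'void main() {',
  '    vNormalW = getNormal(); // engine line that would clobber the custom normal',
  '    gl_Position = matrix_viewProjection * matrix_model * vec4(vertex_position, 1.0);',
  '}',
].join('\n');

// The vNormalW assignment is removed; everything else is untouched
console.log(stripVaryingAssignments(engineVertexShader, 'vNormalW'));
```

Because the match is anchored to an assignment at the start of a line, a `varying vec3 vNormalW;` declaration would be left alone; a production version would again want a real parser rather than line-level regexes.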

This whole process is “static monkeypatching”: modifying someone else’s shader GLSL (the monkeypatching part) and generating new GLSL (the “static” part, since it happens at compile time).

By the way, I think there’s some bug in the PlayCanvas plugin that causes slower rendering when you modify uniforms / edit the graph. I’m not sure if it’s stacking renderings or creating duplicate materials or what, I need to investigate.

Same effect in Babylon: Glass Water - Shaderfrog 2.0 Hybrid Graph Demo
And Three: Water Glass - Shaderfrog 2.0 Hybrid Graph Demo

To me this really does feel like cheating. I get the benefit of all of PlayCanvas’s effects: refraction/reflection/PBR properties, vertex animations, etc, while still being able to customize the material.

Shader authors can click into any node and modify the GLSL to customize effects. Artists can connect nodes together to make effects without having to know GLSL. The tool isn’t there yet, but this is what I’m trying to unlock: extremely fast shader editing. Think synthesizer preset tweaking: cycle through shaders like presets, find some you like, combine them, and enable quick prototyping and iteration.

Also, if you look at the graph for PlayCanvas, it’s really just one node doing the work, since you can put a full GLSL program in that node. In a traditional graph editor, you’re converting GLSL into nodes, so it can become visually verbose. This tool lets you arbitrarily compose GLSL code, so you can put everything in one node if you want, or break it up, kind of like a module system.