Optimize scene using shaders instead of CPU

Hi.

I have a scene with a geo-sphere exported from 3ds Max.
I've iterated through its vertices and indices and built a new model with ~500 mesh instances, each of which has its own GraphNode. Every mesh instance is one triangle of the sphere.

I need them to move to random positions, or to a specified place. It works correctly now, but I'm stuck on a performance problem.

I think I can optimize my scene by using a shader and rendering the whole geometry with it.

The problem is, I don't know how to pass information about the rotation and position of each triangle to the shader.
There is no geometry shader in WebGL, so do I have to do it in the vertex shader?

My scene:

And once there is a shader, do I even need to split the sphere into separate meshes anymore?

Each object in the scene is a separate draw call (probably), which is why you're hitting performance issues. So yes, put it in a shader. The following images are from a single mesh, single shader, single object.

I don't know how to pass information about the rotation and position of each triangle to the shader

A shader already knows where a triangle is - you just have to deform it in some way. While I haven't done this in PlayCanvas, in another engine I got this effect:

Using the vertex shader:

      // legacy GLSL built-ins from that engine (gl_Vertex, gl_Normal, gl_ModelViewProjectionMatrix)
      vec4 offset = gl_Vertex;
      offset.xyz += gl_Normal.xyz * explode;                // push each vertex outward along its normal
      gl_Position = gl_ModelViewProjectionMatrix * offset;

Where explode is a uniform that starts at zero and increases.
Note that this only works for flat-shaded models. Smooth-shaded ones will just grow, because neighbouring triangles share averaged normals and expand together instead of separating.

If you want to do smooth-shaded ones you have to get the expansion vector some other way. I decided to encode it in the vertex color:

Then in the vertex shader:

      vec4 offset = gl_Vertex;
      offset.xyz += (vec3(0.5) - gl_Color.xyz) * explode;   // expansion direction baked into vertex color, biased around 0.5
      gl_Position = gl_ModelViewProjectionMatrix * offset;

It looked like:

Had I triangulated the faces first, it would have looked more like your scene.


@sdfgeoff has a good option if it suits you.

Another way would be to store extra information in the vertex buffer, specifically an index for each vertex. This would allow you to "group" vertices by triangle, so the three vertices of one triangle would share the same index.
Have the indices start from 0 and go up to however many triangles you have.

Then you would have a texture where you encode position information in the RGBA channels; you can use multiple pixels per triangle to store, let's say, an initial position and a target position. In the vertex shader you can then read that information using the vertex's index, decode it, and interpolate.
This gives you a deterministic approach, with interpolation between two pre-defined positions.
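
Roughly, the vertex shader side of that could look like the sketch below. It's only an illustration of the idea: all attribute and uniform names (aPosition, aTriangleIndex, uDataTex, uTexWidth, uProgress, uViewProjection) are placeholders, not PlayCanvas built-ins, and it assumes the device supports vertex texture fetch.

      // Sketch: move each triangle by values fetched from a data texture.
      attribute vec3 aPosition;        // vertex position, relative to the triangle's own origin
      attribute float aTriangleIndex;  // same value for all three vertices of a triangle

      uniform sampler2D uDataTex;      // one row: pixel 2*i = start pos, pixel 2*i + 1 = target pos
      uniform float uTexWidth;         // width of uDataTex in pixels
      uniform float uProgress;         // 0..1 interpolation factor
      uniform mat4 uViewProjection;

      void main() {
          // fetch the two pixels that belong to this triangle
          float u0 = (aTriangleIndex * 2.0 + 0.5) / uTexWidth;
          float u1 = (aTriangleIndex * 2.0 + 1.5) / uTexWidth;
          vec3 startPos  = texture2D(uDataTex, vec2(u0, 0.5)).xyz;
          vec3 targetPos = texture2D(uDataTex, vec2(u1, 0.5)).xyz;

          // place the whole triangle between the two stored positions
          vec3 offset = mix(startPos, targetPos, uProgress);
          gl_Position = uViewProjection * vec4(aPosition + offset, 1.0);
      }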

An alternative way is to store the current position in a texture, update that texture on the GPU each frame (by rendering the new positions into a render target), and then render your model, positioning the vertices in the vertex shader based on that texture data. This is a more procedural approach, where the texture is used to store the state and the logic of movement is defined by the math in the shader.
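
The update pass might look something like the sketch below, assuming it is drawn as a full-screen quad into a second, floating-point position texture (ping-pong), which then becomes the input for the next frame; uPositions, uDeltaTime and vUv are just placeholder names.

      // Fragment shader of the state-update pass: read the old position, write the new one.
      precision highp float;

      uniform sampler2D uPositions;  // current positions, one pixel per triangle
      uniform float uDeltaTime;
      varying vec2 vUv;              // texel coordinate of this triangle's pixel

      void main() {
          vec3 pos = texture2D(uPositions, vUv).xyz;

          // example movement logic: drift upward and wrap around
          pos.y += uDeltaTime * 0.5;
          if (pos.y > 10.0) pos.y -= 20.0;

          gl_FragColor = vec4(pos, 1.0);
      }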

In WebGL 2.0 you have Transform Feedback, which allows you to modify a vertex buffer on the GPU. Unfortunately WebGL 2.0 is not yet supported everywhere, but it could be the easiest option :slight_smile:
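
For reference, a transform-feedback update pass is basically just a vertex shader whose output is captured back into a buffer (the names below are illustrative); on the JavaScript side you register the output with transformFeedbackVaryings and usually enable RASTERIZER_DISCARD while running it:

      #version 300 es
      // Sketch of a transform-feedback update pass (WebGL 2 only).
      in vec3 inPosition;        // current vertex position, read from the source buffer

      uniform float uDeltaTime;

      out vec3 outPosition;      // captured into the destination buffer for the next frame

      void main() {
          // example movement logic: simple upward drift
          outPosition = inPosition + vec3(0.0, uDeltaTime * 0.5, 0.0);
          gl_Position = vec4(0.0); // unused; rasterization is discarded during this pass
      }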


Thanks for your answer, it's extremely useful and helpful.

Right now I'm trying to write this shader, so I'm working on it!

But, as @max said, I need a target to move to, so the texture method is more suitable in my case.

But… I'm thinking about the vertex definition. Why can't I pass the vertex position and its target to the vertex shader directly?

Parse the mesh, get the vertices, add a target position to each, bake it back, and use it in the shader?

You can :slight_smile:
And that will avoid the texture in between.
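
A minimal sketch of that idea (attribute and uniform names are placeholders, not PlayCanvas built-ins) is just a second position attribute in the vertex buffer plus a blend factor uniform:

      attribute vec3 aPosition;      // original vertex position
      attribute vec3 aTarget;        // target position baked into the vertex buffer
      uniform float uProgress;       // 0 = original shape, 1 = target shape
      uniform mat4 uViewProjection;

      void main() {
          vec3 pos = mix(aPosition, aTarget, uProgress);
          gl_Position = uViewProjection * vec4(pos, 1.0);
      }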

This is actually morphing, and we're working on a solution for it; it just needs some examples and more docs. But basically you could create a morph animation in a modeling tool, and it can be used in PlayCanvas.

Okay, I got it.

So for now the solution is under construction and not public, right?

I'll try this hardcore vertex method first, and look at morphing if it fails.

Thanks a lot!

It's in the engine as a low-level layer; there's no component layer for it yet.
Not sure there are docs for it yet either.

Wow, how can I find it?

In the engine. Here is the relevant PR: https://github.com/playcanvas/engine/pull/933


Well, I've implemented my idea with a custom vertex definition and now I can set the position of each triangle.
It works like a charm, except for one detail…

The mesh is still indexed and all my triangles are connected by shared vertices.
So, what is the best way to separate them?

The vertex buffer has a definition object; you would need to modify it so it knows what is what in the vertex buffer.

This is an old thread, but we have examples that would be useful as possible solutions here:

http://playcanvas.github.io/#graphics/hardware-instancing.html
http://playcanvas.github.io/#graphics/batching-dynamic.html
http://playcanvas.github.io/#graphics/point-cloud-simulation.html
http://playcanvas.github.io/#graphics/mesh-morph.html
