I have a scene with a geosphere exported from 3ds Max.
I've iterated through its vertices and indices and built a new model with ~500 mesh instances, each of which has its own GraphNode. Every mesh instance is a single triangle of the sphere.
I need them to move to random positions, or to a specified place. It works correctly now, but I'm stuck on a performance problem.
I think I can optimize the scene by using a shader and rendering the whole geometry with it.
The problem is, I don't know how to pass the rotation and position of each triangle to the shader.
There is no geometry shader in WebGL, so I have to do it in the vertex shader?
Each object in the scene is probably a separate draw call, which is why you're hitting performance issues. So yes, put it in a shader. The following images are for a single mesh, single shader, single object.
"I don't know how to pass to the shader information about rotation and position of triangle"
A shader already knows where a triangle is - you just have to deform it in some way. While I haven't done this in PlayCanvas, in another engine I got this effect:
Another way would be to store extra information in the vertex buffer, specifically an index for each vertex. This allows you to group vertices by triangle, so the three vertices of one triangle all share the same index.
Have the indices start from 0 and go up to the number of triangles you have.
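A minimal sketch of that extra attribute, built on the CPU (the function name is illustrative, not a PlayCanvas API):

```javascript
// Give every vertex a "triangle index" attribute so all three vertices
// of one triangle can be grouped together in the vertex shader.
function buildTriangleIndices(triangleCount) {
    // 3 vertices per triangle, each carrying the index of its triangle
    var indices = new Float32Array(triangleCount * 3);
    for (var t = 0; t < triangleCount; t++) {
        indices[t * 3 + 0] = t;
        indices[t * 3 + 1] = t;
        indices[t * 3 + 2] = t;
    }
    return indices;
}

// For ~500 triangles this is a 1500-entry extra attribute
var triIndex = buildTriangleIndices(500);
```

You would then add this array to your vertex format as an extra attribute (for example in a spare texcoord slot) so the vertex shader can read it.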
Then you would have a texture where you encode position information in the RGBA channels; you can use multiple pixels per triangle to store, say, an initial position and a target position. In the vertex shader you can then read that information using the vertex's triangle index, decode it, and interpolate.
This gives you a deterministic approach with interpolation between two pre-defined positions.
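A CPU-side sketch of the packing and of the math the vertex shader would do when sampling that texture (two RGBA pixels per triangle; all names are illustrative):

```javascript
// Pack an initial and a target position per triangle into float RGBA
// texture data: pixel 0 = initial xyz, pixel 1 = target xyz.
function packPositions(initial, target) {
    // initial/target: arrays of [x, y, z], one entry per triangle
    var count = initial.length;
    var data = new Float32Array(count * 2 * 4); // 2 RGBA pixels per triangle
    for (var t = 0; t < count; t++) {
        var o = t * 8;
        data[o + 0] = initial[t][0]; data[o + 1] = initial[t][1];
        data[o + 2] = initial[t][2]; data[o + 3] = 1; // alpha unused
        data[o + 4] = target[t][0];  data[o + 5] = target[t][1];
        data[o + 6] = target[t][2];  data[o + 7] = 1;
    }
    return data;
}

// What the vertex shader would compute after fetching both pixels:
// mix(initial, target, time), with time in [0, 1]
function samplePosition(data, triIndex, time) {
    var o = triIndex * 8;
    return [
        data[o + 0] + (data[o + 4] - data[o + 0]) * time,
        data[o + 1] + (data[o + 5] - data[o + 1]) * time,
        data[o + 2] + (data[o + 6] - data[o + 2]) * time
    ];
}
```

In the real shader, `time` would be a uniform you animate from script, and the two pixels would be fetched from the texture using the triangle index.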
An alternative is to store the current position of each vertex in a texture, modify it in the vertex shader, and write the result back into that texture. Then render your model and position the vertices based on that texture data. This is a more procedural approach, where the texture stores the state and the movement logic is defined by math in the vertex shader.
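To illustrate the procedural variant, here is the per-step update math modeled on the CPU; in the real version the state array would be a texture and this loop would be the vertex shader body (the move-toward-target rule is just one example of such logic):

```javascript
// "state" holds the current xyz of each point (flat array), standing in
// for the state texture. Each call is one update step: move every
// component toward its target, clamped by speed so it never overshoots.
function stepPositions(state, targets, speed, dt) {
    for (var i = 0; i < state.length; i++) {
        var d = targets[i] - state[i];
        var maxStep = speed * dt;
        state[i] += Math.max(-maxStep, Math.min(maxStep, d));
    }
    return state;
}
```

Run once per frame; after enough steps every point settles on its target, and new targets can be written into the texture at any time.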
In WebGL 2.0 you have Transform Feedback, which allows you to modify a vertex buffer on the GPU. Unfortunately WebGL 2.0 is not yet supported everywhere, but it could be the easiest option.
This is actually morphing, and we are working on a solution for it; we just need an example and more docs. Basically you could author a morph animation in a modeling tool, and it can be used in PlayCanvas.