As you noticed, I’m working on rendering voxels in PlayCanvas. As a next step, I want to add textures to the voxels, and I’m hoping to utilize PlayCanvas’s PBR. I would like to ask the engine developers’ advice on how I should apply PBR to my mesh.
My current program uses a “greedy” voxel meshing algorithm to reduce the number of vertices and faces. (Without greedy meshing, rendering becomes very slow.) Below is an explanation of the behavioral difference between the “greedy” mesher and the “naive” mesher. The “greedy” mesher creates one PlayCanvas model for the entire voxel space.
Then, since I can only apply one material to one PlayCanvas model, it makes sense to use a texture atlas like the one below.
If I want to apply texture “h” to a greedy-meshed 2x3 voxel face, what I have to do is repeat the “h” atlas tile twice horizontally and three times vertically. This is beyond the normal UV offset and UV scale features and can’t be handled by the default PBR shader.
I read /src/graphics/program-lib/phong.js, and I think what I have to do is modify the _addMap function to repeat a part of the texture atlas, and add a vertex attribute that tells the shader how many times to repeat the specified atlas tile for each face generated by the greedy mesher.
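To illustrate what I mean, here is a rough sketch (the names are mine, not actual engine code) of how the mesher could emit per-quad UVs in “tile units” so that a shader can repeat one atlas tile:

```javascript
// Sketch: a merged w x h quad gets UVs from (0,0) to (w,h) in "tile units",
// so a fragment shader can later fract() them to repeat a single atlas tile.
function quadUvs(w, h) {
    return [
        0, 0,  // bottom-left
        w, 0,  // bottom-right
        w, h,  // top-right
        0, h   // top-left
    ];
}
```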
Since I’m very new to the PlayCanvas shader system, I would like to receive some advice from engine developers or shader-savvy people. Thanks in advance!
Also, texture mapping for greedy meshes is explained very well on this page. Please refer to it for a better understanding of my issue.
I’m guessing filtering is dependent on the visual style of the game. But mipmapping is always desirable, no? Otherwise, distant voxels will shimmer badly.
The question is more about WebGL in general.
Basically, if proper filtering/mips are desired, it’s harder. The only way I know involves texture2DGradEXT to fix the gradients after repeating UVs in the fragment shader (without it, you’ll get terrible seams between tiles), so it requires EXT_shader_texture_lod.
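A minimal sketch of that idea (assuming a vTileUv varying in tile units and uniforms for the tile’s origin/size in the atlas; dFdx/dFdy additionally need OES_standard_derivatives on WebGL 1):

```glsl
#extension GL_EXT_shader_texture_lod : enable
#extension GL_OES_standard_derivatives : enable
precision mediump float;

uniform sampler2D uAtlas;
uniform vec2 uTileOrigin; // tile's lower-left corner in atlas UV space
uniform vec2 uTileSize;   // tile's size in atlas UV space
varying vec2 vTileUv;     // runs 0..w / 0..h across the merged face

void main() {
    // Wrap inside the tile, but compute the gradients from the continuous UV
    // so mip selection doesn't break at the fract() discontinuity.
    vec2 uv = uTileOrigin + fract(vTileUv) * uTileSize;
    vec2 dx = dFdx(vTileUv) * uTileSize;
    vec2 dy = dFdy(vTileUv) * uTileSize;
    gl_FragColor = texture2DGradEXT(uAtlas, uv, dx, dy);
}
```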
If you don’t need filtering/mips, you can just carelessly repeat it, like texture2D(tex, fract(uv)), without any extensions.
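For example (same hypothetical uTileOrigin/uTileSize uniforms as above):

```glsl
precision mediump float;

uniform sampler2D uAtlas;
uniform vec2 uTileOrigin;
uniform vec2 uTileSize;
varying vec2 vTileUv;

void main() {
    // No mips: simply wrap within the tile. Seams only matter once mips exist.
    gl_FragColor = texture2D(uAtlas, uTileOrigin + fract(vTileUv) * uTileSize);
}
```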
The other way, sadly, is to use separate tileable textures instead of an atlas. Split your model into separate meshes (one mesh per texture) and tile normally.
I can only apply one material to one PlayCanvas model,
Not exactly. At the engine level, a model consists of meshInstances, which are pairs of a mesh and a material. So in case you’re going to split the model, I would use 1 model with N meshInstances, where N = the number of different materials.
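Something like this sketch (note the MeshInstance constructor arguments have varied between engine versions; `node`, `meshes` and `materials` are assumed to exist):

```javascript
// Sketch: one pc.Model holding one mesh instance per material.
var model = new pc.Model();
model.graph = node;
for (var i = 0; i < meshes.length; i++) {
    model.meshInstances.push(new pc.MeshInstance(node, meshes[i], materials[i]));
}
entity.model.model = model; // hand the model to the entity's model component
```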
My impression is that using multiple materials and meshInstances is generally more flexible and easier to implement. By doing that, I will be able to use filtering and mipmaps as usual. I will also be able to apply different shaders to voxels that share the same material/texture by using the PlayCanvas shader chunk API (for example, applying water and fire shaders to certain voxels, animating the vertices of tree-leaf voxels, etc.).
So my current preference is to experiment with the multiple shaders and meshes option. How much of a performance drop do you expect from using multiple meshes and materials (texture loading, rendering, etc.)? If you don’t expect a significant drop, I will experiment with this multiple material/mesh option.
By the way, is the way PlayCanvas lets users customize the fragment and vertex shaders of a material similar to Three.js ShaderMaterial? There are some Japanese Three.js ShaderMaterial examples, and I’m hoping I can reuse some elements of those tutorials.
You can implement a completely custom vertex and fragment shader in PlayCanvas if you like, and your linked article seemed to have a solution for the texturing, using the whole texture-repetition trick and fractional vertex positions - that code looked like it would be possible to implement in PC.
Every new material/mesh is a draw call, so you are going to take a performance hit if you use multiple materials or multiple textures. In a simple world you could group by material and ensure only one draw call per material, but in any non-trivial world you’ll run into the limit on the number of vertices per mesh, forcing you to create multiple meshes per material - which tends towards the same problem of lots of state changes, with no chance of frustum culling either.
Really, a Model is a convenient collection of associated meshes - maintaining a low number of models doesn’t help much if you have lots of meshes within them. Then of course there’s the number of triangles, which you are already optimising with the greedy algorithm.
If this were to run on mobile, in my experience you’d be trying hard to reduce the number of draw calls/state changes/material switches - but the trade-off of the atlas technique is 4x the memory used and a more complex fragment shader doing 4x the texture sampling work plus a weighted average. With a low number of textures to apply, the multiple-mesh approach will be more performant, but as the number of textures rises I’d imagine all of those switches would cause a problem.
Another thing you could do is have multiple textures in one shader and then use something like the vertex colour or another vertex attribute to switch between textures - I’m not sure if there is a limit on the number of 2D samplers or whether switching between them would cause a pipeline glitch - I’d have to leave that to an expert or some experimentation!
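A rough sketch of that idea, selecting between two samplers with a per-vertex flag (all names hypothetical):

```glsl
precision mediump float;

uniform sampler2D uTex0;
uniform sampler2D uTex1;
varying float vTexIndex; // 0.0 or 1.0, fed from a vertex attribute
varying vec2 vUv;

void main() {
    // Sample both and select - avoids branching on a non-uniform value.
    vec4 c0 = texture2D(uTex0, vUv);
    vec4 c1 = texture2D(uTex1, vUv);
    gl_FragColor = mix(c0, c1, step(0.5, vTexIndex));
}
```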
I haven’t looked at your code, but I’m presuming you are using vertex colours at the moment. Mixing vertex colour tinting with textures can create a wide array of possibilities without the need for so many textures.
My temptation would be to go with the technique in the article you linked and add vertex colour tinting.
It seems the number of draw calls has a high impact on performance. Other Minecraft-like games also seem to use a texture atlas to optimize performance. I will start my investigation from the simplest fragment shader that lets me repeat the diffuse texture without interpolation.
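For “without interpolation”, I believe setting the atlas texture to nearest filtering should be enough; a sketch of what I plan to try:

```javascript
// Sketch: disable interpolation on the atlas so neighbouring tiles don't bleed.
atlasTexture.minFilter = pc.FILTER_NEAREST;
atlasTexture.magFilter = pc.FILTER_NEAREST;
atlasTexture.addressU = pc.ADDRESS_CLAMP_TO_EDGE;
atlasTexture.addressV = pc.ADDRESS_CLAMP_TO_EDGE;
```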
The 4-tap technique introduced there requires custom mipmap generation and seems to go deep into the engine. I still don’t know how to implement that in PlayCanvas, so that study will take a while. I will also check the effect of texture2DGradEXT.
Now I’m working on the fragment shader, so that I can implement a simple “repeat” shader based on the voxel type.
My question is: “Are vertex color and textured materials mutually exclusive in PlayCanvas’s default shader chunks?” Let me explain the context below.
To implement the simple repeat shader, I tried to pass these values to my fragment shader, then use fract() to repeat:
1: How many “chips” there are along one edge of the texture map (I’m currently using a 2x2 texture atlas, so the value is 2).
2: The type of the voxel. I plan to use it to calculate the origin of the texture “chip” in UV space.
I don’t have any issue with #1, but #2 is a problem. Currently in my JavaScript program, each voxel has a 32-bit ID. The lower 24 bits (0x0 to 0xffffff) are reserved for a 24-bit RGB color.
So naturally, I tried to pass a uint32 to the vertex shader via a user-defined attribute, then pass it to the fragment shader via a varying. But it turned out OpenGL ES 2.0 only supports float, vecX and matX attributes. Then I tried to use the default vertex_color attribute, but I realized that if I turn on the Vertex Color option in the PlayCanvas Editor, the fragment shader seems to drop the texture rendering code.
Am I going in the right direction? I’m now thinking it might be difficult to use both vertex color and a texture at once.
Yes, currently vertex color and texture are mutually exclusive for each slot. A simple workaround would be to sample a map from another slot (e.g. diffuse vertex color + emissive map), or to supply your own map via material.setParameter. Do you use material.chunks to replace parts of the shader code?
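For the setParameter route, something like this sketch (the uniform name is arbitrary):

```javascript
// Sketch: bind your own atlas texture to a custom uniform, then read it
// inside a replaced chunk as: uniform sampler2D uVoxelAtlas;
material.setParameter('uVoxelAtlas', atlasTexture);
material.chunks.diffusePS = myGetAlbedoChunkSource; // custom getAlbedo() code
material.update();
```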
My interpretation of your two proposals is as follows. Could you confirm that I’m correct?
Option 1: Use “uniform sampler2D texture_emissiveMap” or some other texture slot within the fragment shader chunk code for the vertex-colored getAlbedo(). Reconstruct the uint32 voxel type from the varying vec4 vertex color. The getEmission() chunk code should not refer to texture_emissiveMap and should always return vec3(0.0, 0.0, 0.0).
Option 2: Add a pc.Texture as a new uniform shader variable, then use it in the fragment shader chunk code for the vertex-colored getAlbedo(). Reconstruct the uint32 voxel type from the varying vec4 vertex color.
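In both options the reconstruction could look roughly like this (a sketch; it assumes the ID bytes were packed byte-wise into the color attribute, and the exact varying name depends on the chunk):

```glsl
// Sketch: rebuild an integer voxel ID from a normalized RGBA varying.
// Each channel arrives as byteValue / 255.0, so undo that and reassemble.
float decodeVoxelId(vec4 c) {
    return floor(c.r * 255.0 + 0.5)
         + floor(c.g * 255.0 + 0.5) * 256.0
         + floor(c.b * 255.0 + 0.5) * 65536.0;
}
```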
By the way, there is a shader asset type that you can load. It is even possible to do live shader reloading without needing to reload the launcher: use the load event on the asset to know when the asset resource has changed, then update the shader/chunk and the material with the new resource.
This enhances shader editing a lot.
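A sketch of that pattern (the asset name and chunk slot are illustrative):

```javascript
// Sketch: hot-swap a chunk whenever the shader asset is (re)loaded.
var asset = app.assets.find('voxelDiffuse.glsl', 'shader');
asset.on('load', function (a) {
    material.chunks.diffusePS = a.resource;
    material.update();
});
app.assets.load(asset);
```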
Thank you Mr_F and Max. I think I fully understand the shader chunk concept now. I will try Option 2 tonight.
I hope complete shader chunk documentation and a tutorial become available soon!
(By the way, why is PlayCanvas designed so that vertex color and texture are mutually exclusive?)
Today I tried Option 2, but I found that when I turn on the vertex color option for the material in the editor, the fragment shader no longer has vUv0 as a varying. If I turn off the vertex color option, then I don’t get the vertex_color attribute. So it seems impossible to do Option 2 straightforwardly. Is there any way to work around this?
I also tried to add the vertex color as a user attribute using pc.SEMANTIC_ATTR0, turning off the material’s vertex color option to get the UV coordinates in the fragment shader while getting the color (actually the voxel ID) as a user attribute. But the engine says: Vertex shader attribute “vertex_userData” is not mapped to a semantic in shader definition.
Since the shader chunk shader definition is controlled by the engine itself, it doesn’t seem like I can update the shader attribute semantic information. I’m now wondering what the right way is to use both vertex color (or a user attribute) along with UV coordinates in the shader chunk system.
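For reference, this is roughly how I declare the custom attribute on the engine side (a sketch; the type constant names may differ between engine versions):

```javascript
// Sketch: a vertex format carrying position, UV0 and a generic ATTR0 slot for
// the voxel ID. The shader-side semantic mapping is the piece that's missing.
var format = new pc.VertexFormat(app.graphicsDevice, [
    { semantic: pc.SEMANTIC_POSITION, components: 3, type: pc.ELEMENTTYPE_FLOAT32 },
    { semantic: pc.SEMANTIC_TEXCOORD0, components: 2, type: pc.ELEMENTTYPE_FLOAT32 },
    { semantic: pc.SEMANTIC_ATTR0, components: 1, type: pc.ELEMENTTYPE_FLOAT32 }
]);
```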
I traced the engine’s behavior for Option 2 (adding a user-defined attribute to the vertex shader). I think I now understand what is happening, but I still don’t know how to effectively pass my voxel ID and UV coordinates to my fragment shader. Let me explain.
Observed behavior
I set material.chunks.startVS to add the line “attribute vec4 vertex_userData;” to the default shader chunk code (see the sketch after this list). This attribute is the desired user attribute.
The PlayCanvas engine calls ShaderChunks.createShaderFromCode(). This function calls shaderChunks.collectAttribs(vsCode), which collects user-defined attributes and updates the semantics information. But at this moment, the modified startVS code isn’t in vsCode yet, so no user attribute semantics information is updated.
Upon first rendering, the engine calls pc.extend.updateShader(), which eventually calls pc.programlib.phong.createShaderDefinition().
pc.programlib.phong.createShaderDefinition() scans the given custom chunks and constructs the shader code and attribute semantics information. But unlike ShaderChunks.createShaderFromCode(), it doesn’t detect user-defined attributes and doesn’t update the semantics information.
I get the error message: Vertex shader attribute “vertex_userData” is not mapped to a semantic in shader definition.
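For completeness, this is the modification from step 1 (a sketch):

```javascript
// Sketch: prepend the attribute declaration to the default startVS chunk.
// The engine never maps vertex_userData to a semantic, hence the error above.
material.chunks.startVS = 'attribute vec4 vertex_userData;\n' + pc.shaderChunks.startVS;
material.update();
```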
My idea
I feel the current shader chunk code doesn’t allow users to use user-defined vertex attributes. To enable them in shader chunks, my proposal would be…
Define a user-defined vertex attribute name such as “vertex_userattr0”
Update the engine to detect that pre-defined name in pc.programlib.phong.createShaderDefinition()
Update the engine to set the semantics information for the user-defined vertex attribute in pc.programlib.phong.createShaderDefinition()
There is no single way to combine vertex colors with maps (multiply? lerp?), so materials just use either one or the other. It’s a bit awkward, though, when you want to add your own way of combining them to the shader, I agree…
Ah, right. You need at least one UV0-mapped texture for vUv0 to appear. So Option 1 would be easier, then.
After reading the shader chunk code, I think it’s a neat system that lets users customize a shader while keeping the nice PlayCanvas PBR, particle, and skinned-animation shaders.
Thank you very much Mr_F. I can’t wait to apply metalness, normal, and all the other types of maps to voxels. It will be fancy!
As for the noise between the voxels, I tried using texture2DGradEXT(), but I realized I can’t insert the #extension GL_EXT_shader_texture_lod directive at the top of the fragment shader code, because the current phong.js shader chunk doesn’t allow me to do so even when I modify chunk.basePS.
You can see the reason in this phong.js code. It might be a good idea to have chunk.extensionPS and chunk.extensionVS so that users can enable extensions if they want. I think it’s a good time to switch from the web editor to local development so that I can customize some engine behavior and send a pull request to the master branch.
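For reference, what a hypothetical chunk.extensionPS would have to emit at the very top of the generated fragment shader (extension directives must precede all other statements):

```glsl
// Sketch: the required prologue, ahead of precision qualifiers and all chunks.
#extension GL_EXT_shader_texture_lod : enable
#extension GL_OES_standard_derivatives : enable
precision mediump float;
// ...the rest of the generated chunks would follow here...
```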