SEMANTIC_POSITION with uint type

Does the engine support positions as Uint32 | Uint16 | Uint8 in a standard material?

    private _buildVertexFormat(store: Uint32Array | Uint16Array) {

        const vertexDesc = [];
        vertexDesc.push({
            semantic: pc.SEMANTIC_POSITION,
            components: 2,
            type: store instanceof Uint32Array ? pc.TYPE_UINT32 : pc.TYPE_UINT16,
            asInt: true // pass the values to the shader as integers, not floats
        });

        // two components per vertex, so the vertex count is half the store length
        return new pc.VertexFormat(this._app.graphicsDevice, vertexDesc, store.length / 2);
    }

   ...

    const format = this._buildVertexFormat(patchBuf.store);
    const vertexBuffer = new pc.VertexBuffer(app.graphicsDevice, format, format.vertexCount, {
        usage: pc.BUFFER_STATIC,
        storage: true,
        data: patchBuf.store,
    });


StandardMaterial generates the shaders it needs, and those use vec3 for vertex_position.
Custom mesh formats are for custom shaders, not for standard material shaders.
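
If you go the custom-shader route, here is a minimal sketch of consuming the integer attribute (assuming a WebGL2 device, since integer vertex attributes need GLSL ES 3.00; the shader bodies are illustrative, not the engine's generated code):

    const shaderDefinition = {
        attributes: { vertex_position: pc.SEMANTIC_POSITION },
        vshader: /* glsl */`
            attribute ivec2 vertex_position;
            uniform mat4 matrix_model;
            uniform mat4 matrix_viewProjection;
            void main(void) {
                vec2 p = vec2(vertex_position); // convert the integer grid coordinate
                gl_Position = matrix_viewProjection * matrix_model * vec4(p.x, 0.0, p.y, 1.0);
            }
        `,
        fshader: /* glsl */`
            precision mediump float;
            void main(void) {
                gl_FragColor = vec4(1.0);
            }
        `
    };
    const shader = new pc.Shader(app.graphicsDevice, shaderDefinition);
    const material = new pc.Material();
    material.shader = shader;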

Could you tell me how to implement the functionality of light and shadows in a custom from a standard shader ?

I don’t think there is any easy way to do this.

Maybe you could try to use a standard material and override some of the chunks that are used to build the shader. This one would let you customize the position attribute: engine/src/scene/shader-lib/chunks/common/vert/transformDecl.js at main · playcanvas/engine · GitHub

and this one its usage: engine/src/scene/shader-lib/chunks/common/vert/transform.js at main · playcanvas/engine · GitHub

But note that this is not documented, and might not be super straightforward to get going.
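
For example, a sketch of overriding those chunks on a material instance (the chunk names are inferred from the linked files, and myCustomTransformVS is a placeholder for your own getPosition() implementation):

    const material = new pc.StandardMaterial();

    // Replace the declaration of the position attribute (transformDecl.js).
    material.chunks.transformDeclVS = /* glsl */`
        attribute ivec2 vertex_position; // integer grid coordinate instead of vec3

        uniform mat4 matrix_model;
        uniform mat4 matrix_viewProjection;
    `;

    // Replace its usage (transform.js).
    material.chunks.transformVS = myCustomTransformVS;

    material.update();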

What are you trying to do?

Yes, I will redefine these chunks, but the task is to get rid of normals and UVs in the vertex data and calculate them in the shader.

The task is as follows: I have a grid with dimensions width by depth, and a world-size parameter scale.

        offsetX = ...;
        offsetZ = ...;
        const coords = new Uint32Array(width * depth * 2); // two uints per vertex
        let index = 0;
        for (let z = 0; z < depth; z++) {
            for (let x = 0; x < width; x++) {
                coords[index * 2]     = offsetX + x;
                coords[index * 2 + 1] = offsetZ + z;
                index++;
            }
        }
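
For completeness, one way to index such a grid into triangles (a sketch; width and depth as above, vertices laid out row-major, two triangles per cell):

    const indices = new Uint32Array((width - 1) * (depth - 1) * 6);
    let i = 0;
    for (let z = 0; z < depth - 1; z++) {
        for (let x = 0; x < width - 1; x++) {
            const tl = z * width + x;  // top-left vertex of the cell
            const tr = tl + 1;         // top-right
            const bl = tl + width;     // bottom-left
            const br = bl + 1;         // bottom-right
            indices[i++] = tl; indices[i++] = bl; indices[i++] = tr;
            indices[i++] = tr; indices[i++] = bl; indices[i++] = br;
        }
    }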

baseVS:

    attribute ivec2 vertex_position;

    uniform mat4 matrix_viewProjection;
    uniform mat4 matrix_model;
    uniform mat3 matrix_normal;

    vec3 dPositionW;
    mat4 dModelMatrix;
    mat3 dNormalMatrix;

transformVS:

    uniform float uTerrainWidth;
    uniform float uTerrainDepth;
    uniform float uWorldScale;

    uniform float uMinHeight;
    uniform float uMaxHeight;

    uniform sampler2D uHeightMap;

    // Decode a height from the DEM texture: average the RGB channels, normalize
    // by alpha, then remap the coefficient into [uMinHeight, uMaxHeight].
    float calcHeightFromHeightMap(vec2 coord)
    {
        float demMinMax = uMaxHeight - uMinHeight;
        vec4 heightMap = texture2D(uHeightMap, coord);
        float coef = (heightMap.r * 255.0 + heightMap.g * 255.0 + heightMap.b * 255.0) / 3.0 / (heightMap.a * 255.0);
        float height = uMinHeight + demMinMax * coef;
        return height;
    }

    mat4 getModelMatrix()
    {
        return matrix_model;
    }
    
    vec4 getPosition()
    {
        dModelMatrix = getModelMatrix();

        float x = float(vertex_position.x);
        float z = float(vertex_position.y);
        vec2 uv = vec2(x / uTerrainWidth, z / uTerrainDepth);
        float height = calcHeightFromHeightMap(uv);
        vec3 localPos = vec3(x * uWorldScale, height, z * uWorldScale);

        vec4 posW = dModelMatrix * vec4(localPos, 1.0);

        dPositionW = posW.xyz;

        vec4 screenPos = matrix_viewProjection * posW;
        return screenPos;
    }

    vec3 getWorldPosition()
    {
        return dPositionW;
    }
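
The uniforms used by these chunks then need to be set from script. A sketch, assuming material is the StandardMaterial whose chunks were overridden and heightMapTexture is a pc.Texture holding the encoded DEM:

    material.setParameter('uTerrainWidth', width);
    material.setParameter('uTerrainDepth', depth);
    material.setParameter('uWorldScale', scale);
    material.setParameter('uMinHeight', minHeight);
    material.setParameter('uMaxHeight', maxHeight);
    material.setParameter('uHeightMap', heightMapTexture);
    material.update();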

I think you need to override this as well (the normal chunk):

and make sure it does not use the ‘vertex_normal’ string in it, as that would automatically include the attribute vertex_normal.

Maybe this too

Oh yes, I did this:

    export const normalVS = /* glsl */`
        vec3 getNormal() {
            dNormalMatrix = matrix_normal;
            vec3 tempNormal = vec3(0.0, 1.0, 0.0); // TODO
            return normalize(dNormalMatrix * tempNormal);
        }
    `;
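
To replace the TODO, the normal could be reconstructed from the height map with central differences. A sketch, assuming calcHeightFromHeightMap and the terrain uniforms from transformVS are visible at this point in the assembled shader:

    export const normalVS = /* glsl */`
        vec3 getNormal() {
            dNormalMatrix = matrix_normal;

            // Sample heights one grid step away in each direction and build
            // the normal from central differences.
            vec2 uv = vec2(float(vertex_position.x) / uTerrainWidth,
                           float(vertex_position.y) / uTerrainDepth);
            vec2 texel = vec2(1.0 / uTerrainWidth, 1.0 / uTerrainDepth);

            float hL = calcHeightFromHeightMap(uv - vec2(texel.x, 0.0));
            float hR = calcHeightFromHeightMap(uv + vec2(texel.x, 0.0));
            float hD = calcHeightFromHeightMap(uv - vec2(0.0, texel.y));
            float hU = calcHeightFromHeightMap(uv + vec2(0.0, texel.y));

            // One grid step covers uWorldScale world units in x and z.
            vec3 n = vec3(hL - hR, 2.0 * uWorldScale, hD - hU);
            return normalize(dNormalMatrix * n);
        }
    `;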