I have an entity with a Model Component and gave it a box type.
Now I see we can view a mesh’s vertex data via .vertexBuffer, and that among some other data this buffer contains 24 vertices, but I don’t see how I can get the position data.
What I’d like to do is get the vertex positions from the buffer and pass them as a parameter to my vertex shader to draw the box instead of a rectangle.
I don’t quite understand what pc.SEMANTIC_POSITION is for either, but it seems to default to a rectangle? Do I need to use that constant per se, or is it just a helper?
I’d like my vertex shader to generate a box.
Can I do this with the information of an entity’s mesh?
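You can read positions back because the vertex format records, for each semantic, where that attribute sits inside the interleaved buffer. Here is a minimal sketch of that idea in plain JavaScript — note this is NOT the PlayCanvas API; it hand-builds a tiny 3-vertex buffer with an assumed layout (3 position floats then 3 normal floats per vertex) to show what the engine’s mesh.vertexBuffer holds internally:

```javascript
// Assumed interleaved layout: per vertex, 3 floats position, 3 floats normal.
const FLOATS_PER_VERTEX = 6;
const vertexCount = 3;
const data = new Float32Array([
    // x, y, z,   nx, ny, nz
    0, 0, 0,   0, 0, 1,
    1, 0, 0,   0, 0, 1,
    0, 1, 0,   0, 0, 1
]);

// Walk the buffer using the position element's offset and the per-vertex
// stride -- the same bookkeeping the engine's vertex format describes.
function readPositions(floatData, count, strideFloats, offsetFloats) {
    const positions = [];
    for (let i = 0; i < count; i++) {
        const base = i * strideFloats + offsetFloats;
        positions.push([floatData[base], floatData[base + 1], floatData[base + 2]]);
    }
    return positions;
}

const positions = readPositions(data, vertexCount, FLOATS_PER_VERTEX, 0);
console.log(positions[1]); // [1, 0, 0]
```

In the real engine, the element offsets and strides (in bytes, not floats) come from the vertex buffer’s format, and the raw data can be accessed by locking the buffer; the loop above is the underlying idea either way.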
In less detail
When I give this entity’s model a basic shader program,
it looks as if it destroys the original box geometry and generates something else.
It just looks like a rectangular plane, or some weird orthographic projection.
The engine does most of the work of passing matrices, attributes and other data to shaders, and keeps their transforms updated.
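For instance, a custom shader is typically set up with a shader definition whose attributes map ties shader attribute names to semantics, and the engine fills uniforms like matrix_model and matrix_viewProjection each frame — if the vertex shader skips that transform, a box collapses into exactly the kind of flat, orthographic-looking quad described above. A hedged sketch (the pc object here is a one-line stub so the snippet is self-contained; in a real project these constants come from the engine):

```javascript
// Stub of the pc namespace purely for illustration; the real constant
// is provided by the PlayCanvas engine.
const pc = { SEMANTIC_POSITION: 'POSITION' };

// The attributes map tells the engine which vertex buffer element (by
// semantic) feeds each shader attribute. The matrix_* uniforms are
// resolved and updated by the engine automatically every frame.
const shaderDefinition = {
    attributes: {
        aPosition: pc.SEMANTIC_POSITION
    },
    vshader: [
        'attribute vec3 aPosition;',
        'uniform mat4 matrix_model;',
        'uniform mat4 matrix_viewProjection;',
        'void main(void) {',
        '    gl_Position = matrix_viewProjection * matrix_model * vec4(aPosition, 1.0);',
        '}'
    ].join('\n'),
    fshader: [
        'precision mediump float;',
        'void main(void) {',
        '    gl_FragColor = vec4(1.0, 0.5, 0.0, 1.0);',
        '}'
    ].join('\n')
};

console.log(shaderDefinition.attributes.aPosition); // 'POSITION'
```

With the real engine you would create a shader from a definition like this and assign it to the model’s material; the key point is that the box geometry is untouched — only the shader decides where those 24 vertices end up on screen.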
Here is a codepen example, copied from the GitHub page.
Don’t get ahead of your understanding. You have to get an idea of what is going on and what you are doing. If you rush too fast, you will get into weird states where gaps in knowledge from different angles will leave you lost.
Ask your questions with specifics, providing actual code and stating what trouble you’ve faced. Generic questions get generic answers, and learning is never easy, but going slowly and steadily is better than running too far ahead without a good understanding.
But, I still have this lingering question about SEMANTIC_POSITION.
What technical difference does it make to use the constant SEMANTIC_POSITION versus SEMANTIC_NORMAL in the shader definition’s attributes?
The documentation says: “Vertex attribute to be treated as a position.” So to my understanding, this means the engine will pass the vertex positions to SEMANTIC_POSITION and the vertex normals to SEMANTIC_NORMAL, etc. Is that correct?
The vertex buffer will have positions, normals and some other attributes available. These attributes then have to be described to the GPU: at which offsets within the vertex buffer they sit, and what data types they use.
They are bound automatically, so in the vertex shader you can access them without worrying about any of it.
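In other words, a semantic is just a label that lets the engine match a buffer element to a shader attribute before drawing. A rough sketch of that bookkeeping (the format object and function names here are illustrative, not the engine’s internals):

```javascript
// Illustrative vertex format with byte offsets/strides, as the GPU sees
// them. 'POSITION' / 'NORMAL' stand in for pc.SEMANTIC_POSITION and
// pc.SEMANTIC_NORMAL.
const vertexFormat = {
    stride: 24, // 6 floats * 4 bytes per vertex
    elements: [
        { semantic: 'POSITION', components: 3, type: 'float32', offset: 0 },
        { semantic: 'NORMAL',   components: 3, type: 'float32', offset: 12 }
    ]
};

// What the engine effectively does for each shader attribute: find the
// element with the matching semantic and tell the GPU where it lives
// (think gl.vertexAttribPointer in raw WebGL).
function resolveAttribute(format, semantic) {
    const el = format.elements.find((e) => e.semantic === semantic);
    if (!el) throw new Error('No element with semantic ' + semantic);
    return { components: el.components, offset: el.offset, stride: format.stride };
}

console.log(resolveAttribute(vertexFormat, 'NORMAL'));
// { components: 3, offset: 12, stride: 24 }
```

So swapping SEMANTIC_POSITION for SEMANTIC_NORMAL in a shader definition doesn’t change the buffer at all — it changes which slice of each vertex gets wired into that shader attribute.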
I don’t really know why you’re digging into such low-level stuff; usually you don’t need to deal with those things at all, unless you need to change some internals, add your own attribute type, etc.
Yeah, I know. It’s just that I’m used to doing low-level OpenGL work and creating buffers myself.
I never really worked with a game engine, only OpenGL libraries such as GLFW.
I’m beginning to understand the engine better and better.
Anyway, this makes life a little easier.
Do remember that the engine is open source. When you feel comfortable with it and feel there is a good improvement to be made, good conversation and pull requests are always welcome.