Hi @prelkin. Those are good questions, I'll go through them:
Models missing some data, such as normals, are not yet supported. Only the uv1 (second UV set) and color attributes are optional. There actually has been a discussion about adding import options for models, which would allow dismissing normals from the source model and generating them instead. However, generating normals is not a fast process in JS and might be too costly for models with a large number of vertices.
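For context on why this is costly: generating smooth vertex normals means walking every triangle, accumulating face normals per vertex, then normalizing, which is O(triangles + vertices) of floating-point work per mesh. A minimal sketch of the idea (the function name and data layout here are illustrative, not engine API):

```javascript
// Sketch: generate smooth vertex normals from interleaved positions
// (x, y, z per vertex) and triangle indices.
function generateNormals(positions, indices) {
    const normals = new Float32Array(positions.length);
    for (let i = 0; i < indices.length; i += 3) {
        const a = indices[i] * 3, b = indices[i + 1] * 3, c = indices[i + 2] * 3;
        // two edge vectors of the triangle
        const e1x = positions[b] - positions[a],
              e1y = positions[b + 1] - positions[a + 1],
              e1z = positions[b + 2] - positions[a + 2];
        const e2x = positions[c] - positions[a],
              e2y = positions[c + 1] - positions[a + 1],
              e2z = positions[c + 2] - positions[a + 2];
        // face normal = cross(e1, e2), accumulated on each vertex
        const nx = e1y * e2z - e1z * e2y;
        const ny = e1z * e2x - e1x * e2z;
        const nz = e1x * e2y - e1y * e2x;
        for (const v of [a, b, c]) {
            normals[v] += nx;
            normals[v + 1] += ny;
            normals[v + 2] += nz;
        }
    }
    // normalize the accumulated normal per vertex
    for (let i = 0; i < normals.length; i += 3) {
        const len = Math.hypot(normals[i], normals[i + 1], normals[i + 2]) || 1;
        normals[i] /= len;
        normals[i + 1] /= len;
        normals[i + 2] /= len;
    }
    return normals;
}
```

Even this simple version allocates and touches every vertex, so for a million-vertex model it is noticeable work on the main thread.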
This was planned, but there has never been an urgent need to implement it. Additionally, all material shader chunks that rely on the availability of such data would have to be extended, and new chunks created to support that shader branching option.
Regarding model format efficiency: if you look at it in terms of JSON + gzip, it actually gives fairly good results. We did some experiments with binary model formats, and switching to binary alone made little difference in size. So we would need to do a number of things: sort the data to benefit from compression; use other techniques to reduce the bit size of some normalized attributes without visibly losing quality; and consider the speed of parsing the model format in JS after it is downloaded. Parsing speed is critical here, as complex parsing of large models will block the main thread, while doing it in a worker (another thread) introduces additional async latency.
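To illustrate the "reduce bit size of normalized attributes" point: since normals and similar attributes are unit-length values in [-1, 1], each component can be packed into a single byte instead of a 32-bit float, at the cost of a quantization error of roughly 1/255 per component, which is typically invisible. A hedged sketch (these helpers are hypothetical, not part of any engine API):

```javascript
// Pack values in [-1, 1] (e.g. normal components) into 8 bits each:
// [-1, 1] maps linearly to [0, 255].
function quantize8(values) {
    const out = new Uint8Array(values.length);
    for (let i = 0; i < values.length; i++) {
        out[i] = Math.round((values[i] * 0.5 + 0.5) * 255);
    }
    return out;
}

// Inverse mapping back to floats; the round trip loses at most
// about 1/255 (~0.004) per component.
function dequantize8(bytes) {
    const out = new Float32Array(bytes.length);
    for (let i = 0; i < bytes.length; i++) {
        out[i] = (bytes[i] / 255) * 2 - 1;
    }
    return out;
}
```

This alone cuts such attributes to a quarter of their float size before gzip, and quantized data also tends to compress better because nearby values collapse to identical bytes.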
glTF is not supported yet; there is a ticket in the engine for it.
These are only thoughts, and we have not started implementing any of it. In practice these are minor differences in real-world applications, where textures account for most of the download size.