Tricks to Decrease Morph Target Sizes?


I’m trying to decrease the size of glTF models that use morph targets (we just use morph targets directly, no animations). Draco looks amazing for models, but it doesn’t support morph targets sadly. Meshopt also looks amazing and is supposed to support morph targets, but it’s creating broken models for me.

What I don’t understand about morph targets is that in the models I’m using, they double the size of the GLB file. As a developer, I would expect the model to store the start and stop positions (all the morphs are X, Y, or Z translate/scale) and have the rest interpolated. For example, if I remove the morphs the file size drops from 300 KB to 150 KB.

Any suggestions or recommended tools?

You’re right about Draco - it does not support morph targets. As for Meshopt, we still need to develop support for it, and it’s not a small job overall.

In the meantime, the best you can do is make sure your morph targets are exported as sparse … that means only the vertices that are actually morphed are exported, instead of storing lots of zeros in the data. And perhaps also make sure morph normals are not exported if you don’t need them.
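To illustrate what sparse export buys you, here’s a minimal sketch (toy data, not a real exporter) of turning a dense morph-delta array into the index/value pairs a sparse encoding stores:

```python
# Dense morph deltas: one Vec3 per vertex, mostly zero (toy data).
deltas = [(0.0, 0.0, 0.0)] * 100
deltas[3] = (0.2, 0.0, 0.0)
deltas[97] = (0.0, 0.0, -1.5)

# Sparse form: only the morphed vertices, plus which vertices they are.
indices = [i for i, d in enumerate(deltas) if d != (0.0, 0.0, 0.0)]
values = [deltas[i] for i in indices]
print(indices, values)  # [3, 97] [(0.2, 0.0, 0.0), (0.0, 0.0, -1.5)]
```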

But going from 150 KB to 300 KB does not sound unreasonable if you have a few different morph targets for the whole mesh. Normally the vertex size is maybe 32 bytes (position: 3×4, normal: 3×4, uv: 2×4). Each morph target adds 12 bytes per vertex … so with about 3 morph targets per vertex, your vertex size would double.
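The arithmetic works out like this (a quick sketch, assuming float32 attributes and position-only morph deltas):

```python
# Per-vertex byte counts for a typical glTF mesh (float32 attributes).
position = 3 * 4      # Vec3 = 12 bytes
normal = 3 * 4        # Vec3 = 12 bytes
uv = 2 * 4            # Vec2 = 8 bytes
base_vertex = position + normal + uv            # 32 bytes

morph_delta = 3 * 4   # each target adds a Vec3 position delta per vertex
with_3_targets = base_vertex + 3 * morph_delta  # 68 bytes, roughly double
print(base_vertex, with_3_targets)  # 32 68
```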


I remember a friend of mine used an ACL library:

Which has a Wasm build via Emscripten:

You can test it in this viewer:


It is strange, I don’t have such a problem with morphs.
As an example, I’ve exported a character with a skeleton + body mesh + face mesh + 10 morph targets for the face mesh.
The whole character is 7.5 MB (with textures). But without the face mesh and all its morphs, it is 5 MB.
So, all the morphs cost me 2.5 MB. Without any compression.
I don’t think it is too heavy.

What software do you use to export your model? Maybe it is exporting some unnecessary data?


Thanks, that is a good point about the sizing. Choosing not to export morph normals in Blender did save me ~30 KB. I thought it was an unexpected anomaly, but it sounds like that might be expected. If you guys are able to support meshopt one day, that would be absolutely amazing. I’m really surprised it’s competitive with Draco.

Do you have any recommendations on how to export morph targets as sparse? I’m trying to find information about a CLI tool or a Blender plugin to accomplish that, but I’m coming up empty-handed; I might be searching with the wrong keywords.

Thank you for the link to the JS project. I did see the ACL repo, but it being C++ headers-only plus the client-side decompression scared me off. I’ll give acl-js a try!


Actually, I think that confirms what I’m working with is kind of expected. So, thanks for the comparison numbers. I’m working with a relatively simple model that uses morph targets heavily. I’m getting an FBX from a 3D modeler who works in Maya, and I’m just trying to see what kind of optimizations I can make on my end. But it sounds like the 150 KB to 300 KB jump from morphs isn’t as unusual as I initially thought.

To provide an update to help anyone coming across this thread:


I looked more at acl-js (ACL for JS), and it looks like that utility is just for animations. If you’re directly manipulating morph targets without any animations, ACL won’t be able to help.

glTF Sparse Accessors

At last I better understand what “sparse” means in glTF. glTF 2.0 introduced “sparse accessors”, also called sparse arrays. These are meant to remove the repeated zeros you may find in morph targets. I’d love to have better control of morph targets, considering that in my use-case I’m interpolating at regular intervals on a linear curve. Khronos has a good explanation of “accessors” and “sparse accessors” in the following PDF on page 3:
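For reference, a sparse accessor in the glTF JSON looks roughly like this (sketched here as a Python dict with hypothetical bufferView indices and counts; 5126 = FLOAT, 5123 = UNSIGNED_SHORT):

```python
# Hypothetical sparse morph-target accessor: logically 1158 Vec3 deltas,
# but only 40 are non-zero, so only those 40 (plus their indices) are stored.
sparse_accessor = {
    "componentType": 5126,   # FLOAT
    "count": 1158,           # logical element count
    "type": "VEC3",
    "sparse": {
        "count": 40,                                          # non-zero entries
        "indices": {"bufferView": 1, "componentType": 5123},  # UNSIGNED_SHORT
        "values": {"bufferView": 2},                          # the 40 Vec3 deltas
    },
}

dense_bytes = 1158 * 12          # full float32 Vec3 array
sparse_bytes = 40 * 2 + 40 * 12  # u16 indices + Vec3 values
print(dense_bytes, sparse_bytes)  # 13896 560
```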

Sparse Accessors Support

Sparse accessors are great for more efficient morph targets, and many glTF viewers began supporting them in models as early as 2017. The problem, however, is that I have yet to find a tool that can take a model with morph targets and encode it using sparse accessors. For example:

  • gltf-pipeline does not support this yet:
  • fbx2gltf does not support this yet:
  • Blender’s glTF importer/exporter can import sparse accessors, but will not encode your model with them if it didn’t already have them.


In the fbx2gltf issue, it was mentioned that Draco makes these “zeros” less of a problem, so I’m thinking GZIP might make them less of an issue as well. My 275 KB model was reduced to 72 KB when gzipped. So, with that in mind, sparse accessors might not be that essential.
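To sanity-check the gzip intuition, here’s a sketch that compresses a mostly-zero morph-style buffer (hypothetical counts, mirroring the mesh below):

```python
import gzip
import struct

# A morph-target-like buffer: 1158 Vec3 float32 deltas where only the first
# 40 vertices are non-zero.
deltas = [0.0] * (1158 * 3)
for i in range(40 * 3):
    deltas[i] = 0.01 * (i + 1)

raw = struct.pack(f"<{len(deltas)}f", *deltas)
packed = gzip.compress(raw)
print(len(raw), len(packed))  # the long run of zeros compresses very well
```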

How Morph Targets Size Up

The nice thing about glTFs is that they use JSON, so you can easily look at what’s going on in the file. So, using an online JSON viewer I was able to look at the makeup of a random mesh in the model. The mesh below can morph along the X axis and the Z axis.

  • Position: 1158 × 12 B (Vec3) = 14 KB
  • Normal: 1158 × 12 B (Vec3) = 14 KB
  • Texcoord_0: 1158 × 8 B (Vec2) = 9 KB
  • Indices: 2964 × 2 B (Scalar) = 6 KB
  • Morph Target (Base Positions): 1158 × 12 B (Vec3) = 14 KB
  • Morph Target (Positions morphed along the X axis): 1158 × 12 B (Vec3) = 14 KB
  • Morph Target (Positions morphed along the Z axis): 1158 × 12 B (Vec3) = 14 KB

Total Mesh Size: 71 KB

The meshes vary, but in general the above shows that morphs account for 42 KB of the 71 KB (60%) in this particular mesh. So, it isn’t that crazy that the file size doubled. I thought the file size was bloated by extra animation keyframes, but that turns out not to be true; the file is indeed only storing the morph extremes.
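Re-running those numbers as a sketch (this assumes the base-position entry shares the POSITION accessor; counting it twice would make the list sum to ~85 KB rather than the stated 71 KB total):

```python
# Byte sizes from the mesh breakdown above.
verts, index_count = 1158, 2964
position = verts * 12   # also serves as the morph base
normal = verts * 12
uv = verts * 8
indices = index_count * 2
morph_x = verts * 12
morph_z = verts * 12

total = position + normal + uv + indices + morph_x + morph_z
morph_bytes = position + morph_x + morph_z  # what the post attributes to morphs
print(total, morph_bytes, round(morph_bytes / total, 2))  # 70776 41688 0.59
```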


Update #2 - This is what I eventually did:

After much pain and suffering I got meshopt compression working on a GLB model that uses morphs.

With gltfpack you have to use -noq (no quantization), since PlayCanvas doesn’t support quantization yet. That option is documented in the -h CLI help output, but is missing from the gltfpack README, which tripped me up.

Having the PlayCanvas viewer for reference (which implements meshopt) helped a lot. Morphs seem to work perfectly fine out of the box, but I did have a weird issue: materials that did not have an image set on the normal map (before going through gltfpack) would not show a diffuse texture in PlayCanvas. I haven’t yet figured out whether that’s an issue with my PlayCanvas code, how the model is built in Maya, PlayCanvas’ implementation, or meshopt’s implementation. All I know is that sending it through gltfpack, even without any compression (no -c) or quantization (-noq), would break the diffuse textures, while not sending it through gltfpack at all would work fine. I’m currently seeing a 2.5x reduction in model size, which I’m thrilled with. (The model is now ~110 KB, down from 275 KB, and 80 KB - 90 KB looks possible.)

Because of the odd texture issue I’ve found this workflow works best for me converting FBX files:

  1. Open .FBX in Blender, and give materials a 4x4 dummy image for the normal map if they don’t have one.
  2. Export as .FBX (instead of .GLB because FBX2GLTF has better compression)
  3. Convert to .GLB with FBX2GLTF.
  4. Compress with gltfpack using -noq and -cc. (I experimented with -si to simplify meshes, but the slight defects it introduced were not acceptable for my use-case.)

Would be worth checking with other viewers like Babylon’s


Good idea. It took me a little while to figure out Babylon’s editor, but I now have it optionally loading a GLB with or without meshopt. The models intentionally don’t come with textures; I’m setting the albedo texture on a material using the UI.

Without meshopt
If I add a texture to the “OEP_v4:lambert2” material, it looks fine:

With meshopt
If I add a texture to the “OEP_v4:lambert2” material, it turns black and doesn’t show any textures:

Granted, that is the meshopt model without the workaround where I give the materials in the FBX a dummy normal map.

PS: If you guys need motivation for the KHR_mesh_quantization support, Babylon.js supports it and I was able to further reduce the model size to 42 KB!

Here is the Babylon scene:
Download it if the model helps you guys debug, because I’ll be taking those models offline later today.


Fantastic info @Chris, thanks for the investigation. I didn’t know of -noq either … it’s great news that it works with that option; we can possibly add meshopt support to the engine even now, for meshes without quantization. Great news!


FYI for anyone coming across this thread:

KHR_mesh_quantization support was recently added to PlayCanvas in v1.46.0, so -noq is no longer needed!