So I’m working on a rudimentary facial animation script based on sound.
https://launch.playcanvas.com/579047?debug=true
The premise is simple:
- Using the visualizer tutorial, I’m taking the `freqData` from the audio to get 0–1 values.
- These values are passed via `.setWeight()` to the `MouthOpen` blendshape of my model.
- I’m using the `tween.js` library to make the movements less jagged: `update()` calls `renderData()`, which starts a 0.5 sec animation from the current weight to the new weight, and that blocks any other animation from starting until the previous one has completed (see the sketch below).
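Roughly, the flow looks like this (simplified; `analyser`, `morphInstance`, and `MOUTH_OPEN` stand in for my actual setup):

```javascript
// Simplified sketch of the update() -> renderData() flow.
// `analyser` is the Web Audio AnalyserNode wired to the sound slot,
// `morphInstance` is the model's morph instance, and `MOUTH_OPEN` is
// the blendshape's target index/name.
var freqData = new Uint8Array(analyser.frequencyBinCount);
var state = { weight: 0 };
var tweening = false;

function renderData() {
    if (tweening) return; // wait until the running tween completes

    analyser.getByteFrequencyData(freqData);

    // Average the bins and normalize to a 0-1 mouth-open amount.
    var sum = 0;
    for (var i = 0; i < freqData.length; i++) sum += freqData[i];
    var target = (sum / freqData.length) / 255;

    tweening = true;
    new TWEEN.Tween(state)
        .to({ weight: target }, 500) // the 0.5 sec animation
        .onUpdate(function () {
            morphInstance.setWeight(MOUTH_OPEN, state.weight);
        })
        .onComplete(function () {
            tweening = false;
        })
        .start();
}

function update(dt) {
    TWEEN.update(); // advance active tweens
    renderData();
}
```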
Although I have movement, I’m looking for a bit of help on how to filter the audio or tweak the values coming out of the `freqData` so that the mouth movements are more precise.
For example, there are moments of silence in the track where the model still has her mouth open. Technically silence should produce a 0 value and close the mouth.
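I suspect part of the problem is that `AnalyserNode` smooths frequency data between frames (`smoothingTimeConstant` defaults to 0.8), so values decay slowly instead of dropping straight to zero; turning that down might already help. One thing I’ve considered is a simple noise gate before anything reaches the tween (the threshold is an arbitrary starting point to tune by ear):

```javascript
// Hypothetical noise gate: snap near-silence to 0 so the mouth actually closes.
var GATE_THRESHOLD = 0.1; // made-up value, tune by ear

function gate(value) {
    return value < GATE_THRESHOLD ? 0 : value;
}
```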
Questions:
- Is there a way to smooth out the data? Maybe I can rely less on tween and just call `setWeight` per value.
- Is there a way of only getting the more relevant peaks? (Imagine having a CONTRAST slider for the sound wave. I sketch one idea right after this list.)
- EXTRA POINTS: Are there any libraries out there that might help me capture better data from the audio and move toward more detailed animation (e.g. can we detect an “O” sound and morph accordingly)?
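To make the first two questions concrete, this is the kind of per-frame filter I have in mind instead of the 0.5 sec tween: gate out silence, raise the value to a power for “contrast”, then smooth with an exponential moving average. All the constants are made up:

```javascript
// Hypothetical filter chain: gate -> contrast -> exponential moving average.
var GATE = 0.1;      // values below this count as silence
var CONTRAST = 2.0;  // >1 emphasizes peaks, <1 flattens them
var SMOOTHING = 0.2; // 0-1; higher = snappier, lower = smoother
var smoothed = 0;

function filterLevel(raw) {
    // Gate, then re-normalize the surviving range back to 0-1.
    var gated = raw < GATE ? 0 : (raw - GATE) / (1 - GATE);

    // "Contrast slider": a power curve stretches loud peaks and
    // squashes quiet noise.
    var contrasted = Math.pow(gated, CONTRAST);

    // EMA: cheap smoothing that could replace the tween entirely,
    // letting update() call setWeight directly every frame.
    smoothed += (contrasted - smoothed) * SMOOTHING;
    return smoothed;
}
```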
I tried working with Tone.js but got stuck on how to pass the audio data to this library.
They do:
```javascript
// From the Tone.js docs; `meter` would be e.g. a Tone.Meter instance.
var player = new Tone.Player({
    "url" : "./audio/FWDL.[mp3|ogg]",
    "loop" : true
}).connect(meter).toMaster();
```
but I grab my audio as:
```javascript
var slot = this.entity.sound.slot("track");
slot.setExternalNodes(this.analyser);
```
I could probably pass an `asset` value to that URL, but I’m not sure I want to transfer all audio control to that library. Thoughts?
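The alternative I’m leaning toward is skipping Tone.js entirely and reading levels off a plain Web Audio `AnalyserNode`, which keeps PlayCanvas in charge of playback. Something like this, assuming `this.app.systems.sound.context` is the engine’s AudioContext as in the visualizer tutorial:

```javascript
// Hypothetical: wire a native AnalyserNode into the slot instead of Tone.js.
var ctx = this.app.systems.sound.context;
this.analyser = ctx.createAnalyser();
this.analyser.fftSize = 1024;

var slot = this.entity.sound.slot("track");
slot.setExternalNodes(this.analyser); // audio still routes through PlayCanvas

// RMS of the time-domain signal tracks loudness more directly than averaged
// frequency bins, and it genuinely falls to ~0 during silence.
var timeData = new Float32Array(this.analyser.fftSize);

function rmsLevel(analyser) {
    analyser.getFloatTimeDomainData(timeData);
    var sum = 0;
    for (var i = 0; i < timeData.length; i++) {
        sum += timeData[i] * timeData[i];
    }
    return Math.sqrt(sum / timeData.length); // 0 = silence, ~1 = full scale
}
```

That still leaves the EXTRA POINTS question open, though: detecting an actual “O” sound means looking at the shape of the spectrum (formants), not just loudness, which is beyond anything above.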