Hi all,
I’m playing with adding a “lip-sync” feature to an app.
I have a set of mouth drawings and want to sync them to a spoken sentence recorded by the user.
I see two options:
- A simple sync, where the volume of the sound on each frame determines how far the mouth should be opened (see the first sketch below).
- A more advanced sync, where the audio is analyzed into phonemes (more like SmartMouth in Flash), so there would be different mouths for “e” and “o”, for example (see the second sketch below).
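
For the simple sync, the per-frame loudness can be estimated as the RMS of the audio samples that fall inside each animation frame, then quantized down to however many mouth drawings you have. Here’s a minimal sketch in Python, assuming a 16-bit mono WAV recording; the 12 fps frame rate and the four-level mouth scale are placeholder choices of mine, not from any particular library:

```python
import wave
import numpy as np

def mouth_openness(path, fps=12, levels=4):
    """Return one mouth-openness level (0..levels-1) per animation frame."""
    # Assumes a 16-bit mono WAV file.
    with wave.open(path, "rb") as wav:
        rate = wav.getframerate()
        samples = np.frombuffer(wav.readframes(wav.getnframes()), dtype=np.int16)

    window = rate // fps                      # audio samples per animation frame
    rms = []
    for start in range(0, len(samples) - window + 1, window):
        chunk = samples[start:start + window].astype(np.float64)
        rms.append(np.sqrt(np.mean(chunk ** 2)))  # loudness of this frame

    peak = max(rms, default=1.0) or 1.0       # normalize against the loudest frame
    return [min(int(v / peak * levels), levels - 1) for v in rms]
```

Calling `mouth_openness("recording.wav")` would give one index per animation frame, ready to pick a mouth image with. Normalizing against the loudest frame means a quiet recording still uses the full range of mouths.

For the phoneme route, the recognition itself is the hard part (a recognizer such as CMU PocketSphinx can produce timed phonemes), but the step after that is simple: collapse the recognizer’s phonemes into the handful of mouth drawings (“visemes”) you actually have. A sketch of that mapping step, using ARPAbet phoneme labels; the group names and the groupings themselves are just one illustrative choice:

```python
# ARPAbet phoneme labels grouped into a few mouth drawings ("visemes").
# Both the viseme names and the groupings are illustrative, not standard.
VISEMES = {
    "closed": {"M", "B", "P"},
    "open":   {"AA", "AE", "AH", "AY"},
    "round":  {"AO", "OW", "UW", "W"},
    "wide":   {"IY", "EH", "EY"},
    "teeth":  {"F", "V"},
    "tongue": {"L", "TH", "DH"},
}

# Invert into a flat phoneme -> viseme lookup table.
PHONEME_TO_VISEME = {p: v for v, group in VISEMES.items() for p in group}

def mouth_for(phoneme, default="rest"):
    """Pick a mouth drawing for a phoneme; unlisted phonemes fall back to a rest pose."""
    return PHONEME_TO_VISEME.get(phoneme.upper(), default)
```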
Being an animator I’m very used to manual lip-syncing, and I certainly know the limitations of any automatic process, but it would be of some use in this app.
If anyone has any clue where I could start looking, or any similar examples, I’d be most grateful. I’ve searched around here but haven’t found anything yet.