Emotional voice puppetry

Published in IEEE Transactions on Visualization and Computer Graphics, 2023

The paper presents emotional voice puppetry, an audio-based facial animation approach for portraying characters with vivid emotional changes. The lip motion and surrounding facial areas are controlled by the content of the audio, while the facial dynamics are driven by the category and intensity of the emotion. Our approach is distinctive in that it accounts for perceptual validity and geometry rather than relying on purely geometric processing. Another highlight of our approach is its generalizability to multiple characters. The findings show that, when training new secondary characters, categorizing the rig parameters as eyes, eyebrows, nose, mouth, and signature wrinkles yields better generalization results than joint training. User studies demonstrate the effectiveness of our approach both qualitatively and quantitatively. Our approach is applicable to AR/VR and 3DUI, namely virtual reality avatars/self-avatars, teleconferencing, and in-game dialogue.

Recommended citation: Pan, Y., Zhang, R., Cheng, S., Tan, S., Ding, Y., Mitchell, K., & Yang, X. (2023). Emotional voice puppetry. IEEE Transactions on Visualization and Computer Graphics, 29(5), 2527-2535.
Download Paper