Fully Automatic Blendshape Generation for Stylized Characters

Published in 2023 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), 2023

Avatars are one of the most important elements in virtual environments. Real-time facial retargeting is of vital importance in AR/VR interaction, filmmaking, and the entertainment industry, and blendshapes for avatars are one of its key assets. Previous works either focused on characters sharing the same topology, and thus cannot generalize to arbitrary avatars, or relied on optimization methods that place high demands on the dataset. In this paper, we draw on deep learning and feature transfer to realize deformation transfer, thereby generating blendshapes for target avatars from a given source. We propose a Variational Autoencoder (VAE) to extract the latent space of each avatar and a Multilayer Perceptron (MLP) to translate between the latent spaces of the source and target avatars. By decoding the latent codes of different blendshapes, we obtain blendshapes for the target avatars with the same semantics as those of the source. We qualitatively and quantitatively compare our method with both classical and learning-based methods. The results show that the blendshapes generated by our method achieve higher similarity to the ground-truth blendshapes than state-of-the-art methods. We also demonstrate that our method can be applied to expression transfer for stylized characters with different topologies.
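The abstract only outlines the pipeline (per-avatar VAEs plus an MLP that maps between their latent spaces), and the paper's actual network details are not given here. The following PyTorch-style sketch is a rough illustration of that idea under stated assumptions: the module names, layer sizes, and the flattened vertex-offset representation are all hypothetical and are not taken from the paper.

```python
import torch
import torch.nn as nn

class MeshVAE(nn.Module):
    """Hypothetical per-character VAE over flattened vertex offsets (n_verts * 3)."""
    def __init__(self, n_verts, latent_dim=64, hidden=512):
        super().__init__()
        d = n_verts * 3
        self.enc = nn.Sequential(nn.Linear(d, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, latent_dim)
        self.logvar = nn.Linear(hidden, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, d))

    def encode(self, x):
        h = self.enc(x)
        return self.mu(h), self.logvar(h)

    def reparameterize(self, mu, logvar):
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

    def decode(self, z):
        return self.dec(z)

class LatentTranslator(nn.Module):
    """Hypothetical MLP mapping source latent codes to target latent codes."""
    def __init__(self, latent_dim=64, hidden=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(latent_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, latent_dim))

    def forward(self, z):
        return self.net(z)

def transfer_blendshape(src_vae, tgt_vae, translator, src_offsets):
    """Sketch of inference: encode a source blendshape, translate its latent
    code, and decode it with the target avatar's decoder."""
    with torch.no_grad():
        mu, _ = src_vae.encode(src_offsets)  # use the mean code at test time
        z_tgt = translator(mu)
        return tgt_vae.decode(z_tgt)         # vertex offsets for the target avatar
```

In this reading, decoding the translated codes of every source blendshape yields a full set of semantically matching blendshapes for the target avatar, which is what enables retargeting across characters with different topologies.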

Recommended citation: Wang, J., Qiu, Y., Chen, K., Ding, Y., & Pan, Y. (2023, March). Fully Automatic Blendshape Generation for Stylized Characters. In 2023 IEEE Conference on Virtual Reality and 3D User Interfaces (VR) (pp. 347-355). IEEE.
Download Paper