Principal components of expressive speech animation
- 13 November 2002
- conference paper
- Published by Institute of Electrical and Electronics Engineers (IEEE)
Abstract
In this paper, we describe a new technique for expressive and realistic speech animation. To capture the movements of a talking person's face, we use an optical tracking system that extracts the 3D positions of markers attached at feature-point locations, using the feature points defined by the MPEG-4 standard. We then apply Principal Component Analysis to this data to form a vector-space representation, which we call the "expression and viseme space". Such a representation not only offers insight into improving the realism of animated faces, but also gives a new way of generating convincing speech animation and blending between several expressions. Because rigid-body movements and deformation constraints on the facial movements are accounted for in this analysis, the resulting facial animation is very realistic.
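The abstract's core idea, building a low-dimensional "expression and viseme space" by applying PCA to captured 3D marker positions, can be sketched as follows. This is a minimal illustration, not the authors' implementation; the marker count, frame count, and number of retained components are assumptions chosen for the example.

```python
import numpy as np

# Hypothetical stand-in for optical motion-capture data: each frame is the
# flattened (x, y, z) positions of MPEG-4-style facial feature points.
rng = np.random.default_rng(0)
n_frames, n_markers = 200, 27
X = rng.normal(size=(n_frames, n_markers * 3))

# Center the frames and compute principal components via SVD.
mean = X.mean(axis=0)
Xc = X - mean
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

k = 10                           # retain the first k components (assumed)
components = Vt[:k]              # basis of the expression/viseme space
weights = Xc @ components.T      # per-frame coordinates in that space

# New facial configurations come from blending weights in the reduced space
# and projecting back to marker positions (here, a naive two-frame blend).
blend = weights[:2].mean(axis=0)
frame = mean + blend @ components
print(frame.shape)               # (81,) -- one reconstructed marker frame
```

Blending in the reduced space, rather than on raw marker coordinates, is what makes mixing several expressions with speech movements tractable: each expression or viseme is a point (or trajectory) in a k-dimensional space instead of a full set of 3D positions.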
This publication has 9 references indexed in Scilit:
- Feature Point Based Mesh Deformation Applied to MPEG-4 Facial Animation. Published by Springer Science and Business Media LLC, 2001
- Multi-modal Speech Synthesis with Applications. Published by Springer Science and Business Media LLC, 1999
- Modeling Coarticulation in Synthetic Visual Speech. Published by Springer Science and Business Media LLC, 1993
- Physically-based facial modelling, analysis, and animation. The Journal of Visualization and Computer Animation, 1990
- Animating speech: an automated approach using speech synthesised by rules. The Visual Computer, 1988
- Motion Understanding. Published by Springer Science and Business Media LLC, 1988
- A muscle model for animating three-dimensional facial expression. ACM SIGGRAPH Computer Graphics, 1987
- Principal Component Analysis. Springer Series in Statistics, 1986
- Parameterized Models for Facial Animation. IEEE Computer Graphics and Applications, 1982