From Emotions to Action Units with Hidden and Semi-Hidden-Task Learning
- 1 December 2015
- conference paper
- Published by Institute of Electrical and Electronics Engineers (IEEE)
- p. 3703-3711
- https://doi.org/10.1109/iccv.2015.422
Abstract
Limited annotated training data is a challenging problem in Action Unit (AU) recognition. In this paper, we investigate how large databases labelled according to the six universal facial expressions can increase the generalization ability of Action Unit classifiers. For this purpose, we propose a novel learning framework: Hidden-Task Learning (HTL). HTL aims to learn a set of Hidden Tasks (Action Units) for which no training samples are available but for which data is easier to obtain from a set of related Visible Tasks (facial expressions). To that end, HTL exploits prior knowledge about the relation between Hidden and Visible Tasks; in our case, this prior knowledge comes from empirical psychological studies providing statistical correlations between Action Units and universal facial expressions. Additionally, we extend HTL to Semi-Hidden-Task Learning (SHTL), which assumes that Action Unit training samples are also available. Performing exhaustive experiments over four different datasets, we show that HTL and SHTL improve the generalization ability of AU classifiers by training them with additional facial expression data. We also show that SHTL achieves competitive performance compared with state-of-the-art Transductive Learning approaches, which address the problem of limited training data by using unlabelled test samples during training.
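The core idea of the abstract — training hidden-task (AU) classifiers from visible-task (expression) labels through a correlation prior — can be sketched as follows. This is a minimal illustration, not the paper's actual method: the prior matrix values, the four AU columns, and the plain logistic-regression training are all assumptions made for the example; the paper's prior is derived from empirical psychological studies.

```python
import numpy as np

# Hypothetical prior: rows = 6 universal expressions, cols = 4 example AUs.
# Entry [e, a] is an assumed P(AU a active | expression e); the numbers are
# illustrative only, not taken from the paper's psychological sources.
PRIOR = np.array([
    [0.9, 0.1, 0.8, 0.0],  # happiness
    [0.1, 0.9, 0.0, 0.7],  # sadness
    [0.2, 0.8, 0.1, 0.9],  # anger
    [0.8, 0.2, 0.9, 0.1],  # surprise
    [0.1, 0.7, 0.2, 0.8],  # fear
    [0.3, 0.6, 0.1, 0.5],  # disgust
])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_hidden_tasks(X, expr_labels, prior, lr=2.0, steps=2000):
    """Fit one linear classifier per hidden task (AU) without any AU labels:
    each sample's soft AU targets are read off the prior row for its
    visible-task (expression) label, then minimized under cross-entropy."""
    soft_targets = prior[expr_labels]            # (n_samples, n_aus)
    n, d = X.shape
    W = np.zeros((d, prior.shape[1]))
    for _ in range(steps):
        p = sigmoid(X @ W)                       # predicted AU probabilities
        W -= lr * X.T @ (p - soft_targets) / n   # cross-entropy gradient
    return W

# Toy usage: with one-hot expression features, the learned AU classifiers
# should approximately reproduce the prior correlations.
X = np.eye(6)
expr_labels = np.arange(6)
W = train_hidden_tasks(X, expr_labels, PRIOR)
aus_for_happiness = sigmoid(X @ W)[0]
```

In this toy setting the classifiers can only recover the prior itself; the point of HTL is that with real facial features, classifiers trained this way generalize to AU prediction on unseen faces.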