Machine learning methods for fully automatic recognition of facial expressions and facial actions

Abstract
We present a systematic comparison of machine learning methods applied to the problem of fully automatic recognition of facial expressions. We explored recognition of facial actions from the Facial Action Coding System (FACS), as well as recognition of full facial expressions. Each video frame is first scanned in real time to detect approximately upright-frontal faces. The faces found are scaled into image patches of equal size, convolved with a bank of Gabor energy filters, and then passed to a recognition engine that codes facial expressions into 7 dimensions in real time: neutral, anger, disgust, fear, joy, sadness, and surprise. We report results on a series of experiments comparing recognition engines, including AdaBoost, support vector machines, and linear discriminant analysis, as well as feature selection techniques. Best results were obtained by selecting a subset of Gabor filters using AdaBoost and then training support vector machines on the outputs of the filters selected by AdaBoost. The generalization performance to new subjects for recognition of full facial expressions in a 7-way forced choice was 93% correct, the best performance reported so far on the Cohn-Kanade FACS-coded expression dataset. We also applied the system to fully automated facial action coding. The present system classifies 18 action units, whether they occur singly or in combination with other actions, with a mean agreement rate of 94.5% with human FACS codes on the Cohn-Kanade dataset. The outputs of the classifiers change smoothly as a function of time and can thus be used to measure facial expression dynamics.
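The sketch below illustrates the general pipeline the abstract describes (Gabor energy features extracted from aligned face patches, AdaBoost used to rank and select a feature subset, and an SVM trained on the selected features). It is a minimal approximation, not the authors' implementation: it assumes pre-cropped grayscale face patches, uses scikit-image, SciPy, and scikit-learn as stand-ins, and the filter-bank parameters, patch size, number of selected features, and helper names such as `build_gabor_bank` and `train_adaboost_svm` are illustrative choices, not values from the paper.

```python
import numpy as np
from scipy.ndimage import convolve
from skimage.filters import gabor_kernel
from sklearn.ensemble import AdaBoostClassifier
from sklearn.svm import SVC


def build_gabor_bank(frequencies=(0.1, 0.2, 0.3, 0.4), n_orientations=8):
    """Bank of complex Gabor kernels at several spatial frequencies and orientations
    (illustrative parameters, not the paper's filter bank)."""
    kernels = []
    for freq in frequencies:
        for k in range(n_orientations):
            theta = k * np.pi / n_orientations
            kernels.append(gabor_kernel(freq, theta=theta))
    return kernels


def gabor_energy_features(patch, kernels):
    """Gabor energy (magnitude of the complex filter response) for one aligned
    grayscale face patch, flattened into a single feature vector."""
    feats = []
    for kern in kernels:
        real = convolve(patch, np.real(kern), mode="reflect")
        imag = convolve(patch, np.imag(kern), mode="reflect")
        feats.append(np.hypot(real, imag).ravel())
    return np.concatenate(feats)


def train_adaboost_svm(X, y, n_selected=200):
    """AdaBoost with decision stumps picks roughly one feature per boosting round;
    the nonzero-importance features stand in for 'filters selected by AdaBoost',
    and a linear SVM is then trained on that subset."""
    booster = AdaBoostClassifier(n_estimators=n_selected)
    booster.fit(X, y)
    selected = np.argsort(booster.feature_importances_)[::-1][:n_selected]
    svm = SVC(kernel="linear").fit(X[:, selected], y)
    return selected, svm


# Usage sketch: `patches` is an array of aligned grayscale face patches and
# `labels` holds the 7-way expression codes (both assumed to exist).
# kernels = build_gabor_bank()
# X = np.stack([gabor_energy_features(p, kernels) for p in patches])
# selected, svm = train_adaboost_svm(X, labels)
# predictions = svm.predict(X[:, selected])
```

Using decision stumps as the weak learners mirrors the idea of selecting one Gabor output per boosting round, though the paper's exact boosting variant and selection criterion may differ.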