Learning Multiscale Active Facial Patches for Expression Analysis

Abstract
In this paper, we present a new idea for analyzing facial expressions by exploring common and specific information among different expressions. Inspired by the observation that only a few facial parts are active in expressing emotion (e.g., the regions around the mouth and eyes), we try to discover the common patches that are important for discriminating all expressions and the specific patches that are important only for a particular expression. A two-stage multitask sparse learning (MTSL) framework is proposed to efficiently locate these discriminative patches. In the first stage, multiple expression recognition tasks, each of which aims to find the dominant patches for one expression, are combined to locate the common patches. In the second stage, two related tasks, facial expression recognition and face verification, are coupled to learn the specific patches for each individual expression. The two-stage patch learning is performed on patches sampled with a multiscale strategy. Extensive experiments validate the existence and significance of the common and specific patches. Using these learned patches, we achieve superior performance on expression recognition compared with state-of-the-art methods.
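To make the first-stage idea concrete, the following is a minimal sketch (not the authors' implementation) of multitask sparse patch selection: each expression is treated as one task, a face is represented by per-patch features, and a multi-task sparse model with a group penalty zeroes out entire patches that help none of the expression tasks, leaving candidates for the "common" patches. The patch counts, the random stand-in features, and the use of scikit-learn's MultiTaskLasso in place of the paper's MTSL solver are all assumptions for illustration.

```python
# Sketch of stage-one common-patch selection via multi-task sparse learning.
# Assumptions: random data stands in for patch descriptors (e.g., LBP pooled
# per patch), and MultiTaskLasso substitutes for the paper's MTSL solver.
import numpy as np
from sklearn.linear_model import MultiTaskLasso

rng = np.random.default_rng(0)

n_samples, n_patches = 300, 64        # e.g., 64 candidate multiscale patches
n_expressions = 6                     # one task per basic expression

# X: one feature per patch; Y: one column (task) per expression.
X = rng.standard_normal((n_samples, n_patches))
true_active = rng.choice(n_patches, size=8, replace=False)   # hypothetical ground truth
W_true = np.zeros((n_patches, n_expressions))
W_true[true_active] = rng.standard_normal((8, n_expressions))
Y = X @ W_true + 0.1 * rng.standard_normal((n_samples, n_expressions))

# The l2/l1 group penalty removes a patch for *all* tasks at once, which is
# the mechanism for selecting patches shared across expression tasks.
model = MultiTaskLasso(alpha=0.1).fit(X, Y)

patch_importance = np.linalg.norm(model.coef_, axis=0)       # one score per patch
common_patches = np.flatnonzero(patch_importance > 1e-6)
print("selected common patches:", common_patches)
```

In the same spirit, the second stage would add a face-verification task and solve a coupled sparse problem per expression, so that patches carrying identity information are discouraged and the remaining patches are specific to that expression.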
