An Associate-Predict Model for Face Recognition

Abstract
Handling intra-personal variation is a major challenge in face recognition. It is difficult to appropriately measure the similarity between human faces under significantly different settings (e.g., pose, illumination, and expression). In this paper, we propose a new model, called the “Associate-Predict” (AP) model, to address this issue. The associate-predict model is built on an extra generic identity data set, in which each identity contains multiple images with large intra-personal variation. When considering two faces under significantly different settings (e.g., non-frontal and frontal), we first “associate” one input face with alike identities from the generic identity data set. Using the associated faces, we either generatively “predict” the appearance of one input face under the setting of the other input face, or discriminatively “predict” the likelihood that the two input faces are from the same person. We call these two prediction methods “appearance-prediction” and “likelihood-prediction”. By leveraging an extra data set (“memory”) and the “associate-predict” model, intra-personal variation can be handled effectively. To improve the generalization ability of our model, we further add a switching mechanism: we directly compare the appearances of two faces if they have close intra-personal settings; otherwise, we use the associate-predict model for recognition. Experiments on two public face benchmarks (Multi-PIE and LFW) demonstrate that our final model can substantially improve the performance of most existing face recognition methods.
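The associate-predict flow described above (switching, association against a generic identity set, and appearance-prediction) can be sketched roughly as follows. This is a minimal illustration under assumed simplifications: faces are represented by precomputed feature vectors, intra-personal settings are discrete labels, and the generic set is a list of per-identity dictionaries. The function and parameter names (e.g., appearance_predict_similarity, top_k) are hypothetical and not taken from the paper.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def appearance_predict_similarity(feat_a, setting_a, feat_b, setting_b,
                                  generic_set, top_k=3):
    """Sketch of the switching + associate-predict (appearance-prediction) idea.

    feat_a, feat_b   : feature vectors of the two input faces (assumed precomputed)
    setting_a/_b     : discrete setting labels, e.g. 'frontal' or 'profile'
    generic_set      : list of identities; each identity is a dict mapping a
                       setting label to that identity's feature vector
    """
    # Switching mechanism: close intra-personal settings -> compare directly.
    if setting_a == setting_b:
        return cosine(feat_a, feat_b)

    # Associate: rank generic identities by similarity to face A under A's setting.
    candidates = [p for p in generic_set if setting_a in p and setting_b in p]
    if not candidates:
        return cosine(feat_a, feat_b)  # fall back to direct comparison
    candidates.sort(key=lambda p: cosine(feat_a, p[setting_a]), reverse=True)
    associated = candidates[:top_k]

    # Appearance-prediction: estimate face A's appearance under B's setting
    # from the associated identities' features in that setting.
    predicted_a = np.mean([p[setting_b] for p in associated], axis=0)

    # Compare the predicted appearance of A with face B.
    return cosine(predicted_a, feat_b)

# Illustrative usage with synthetic 128-d features (assumption, not real data).
rng = np.random.default_rng(0)
generic_set = [{'frontal': rng.normal(size=128), 'profile': rng.normal(size=128)}
               for _ in range(100)]
score = appearance_predict_similarity(rng.normal(size=128), 'profile',
                                      rng.normal(size=128), 'frontal',
                                      generic_set)
```

The likelihood-prediction variant would instead train a discriminative classifier on the associated faces to score whether the two inputs share an identity; it is omitted here for brevity.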
