An Ensemble of Invariant Features for Person Reidentification

Abstract
This paper proposes an ensemble of invariant features (EIFs) that properly handles variations in color and in human pose/viewpoint when matching pedestrian images observed by different cameras with nonoverlapping fields of view. Our proposed method is a direct reidentification (re-id) method, requiring no prior domain learning from prelabeled correspondence training data. The novel features comprise holistic and region-based features. The holistic features are extracted with a publicly available deep convolutional neural network pretrained for generic object classification. In contrast, the region-based features are extracted by our proposed two-way Gaussian mixture model (GMM) fitting, which overcomes self-occlusion and pose variation. To generalize better when recognizing identities without additional learning, the ensemble scheme aggregates all feature distances using similarity normalization. The proposed framework is robust against partial occlusion and pose and viewpoint changes. Moreover, the evaluation results show that our method outperforms state-of-the-art direct re-id methods on the challenging Viewpoint Invariant Pedestrian Recognition (VIPeR) and 3D People Surveillance (3DPeS) benchmark data sets.
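The abstract only states that per-feature distances are normalized and then aggregated; the sketch below illustrates one plausible reading of that ensemble step, assuming min-max normalization per feature channel (the paper's exact "similarity normalization" may differ, and all variable names here are hypothetical).

```python
# Hypothetical sketch of the ensemble aggregation step: each feature channel
# (e.g., holistic CNN features, region-based GMM features) yields a distance
# from the probe image to every gallery image; distances are normalized per
# channel so that no single feature's scale dominates, then averaged.
# Min-max normalization is an assumption, not the paper's stated choice.
import numpy as np

def ensemble_distances(channel_distances):
    """Aggregate per-feature distance vectors into one ranking score.

    channel_distances: list of 1-D arrays, one per feature channel,
        each holding the probe-to-gallery distances for that channel.
    Returns a 1-D array of aggregated distances (lower = better match).
    """
    aggregated = np.zeros_like(channel_distances[0], dtype=float)
    for d in channel_distances:
        span = d.max() - d.min()
        # Normalize to [0, 1]; guard against a constant channel.
        aggregated += (d - d.min()) / span if span > 0 else np.zeros_like(d)
    return aggregated / len(channel_distances)

# Example: two feature channels scoring three gallery candidates.
holistic = np.array([0.8, 0.3, 0.5])   # CNN-feature distances (assumed)
regional = np.array([12.0, 4.0, 9.0])  # region-feature distances (assumed)
ranking = np.argsort(ensemble_distances([holistic, regional]))
print(ranking)  # gallery indices ordered from best to worst match
```

Normalizing before aggregation matters because the holistic and region-based distances live on different scales; without it, the channel with the larger numeric range would dominate the ensemble ranking.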
