Hierarchical Integration of Rich Features for Video-Based Person Re-Identification

Abstract
Person re-identification (ReID) aims to associate the identities of pedestrians captured by cameras covering non-overlapping areas. Video-based ReID plays an important role in intelligent video surveillance systems and has attracted growing attention in recent years. In this paper, we propose an end-to-end video-based ReID framework based on the convolutional neural network (CNN) for efficient spatio-temporal modeling and enhanced similarity measurement. Specifically, we build sequence descriptors by applying basic mathematical operations to semantic mid-level image features, which avoids time-consuming computations and the loss of spatial correlations. We further extract image features hierarchically from multiple intermediate CNN stages to build multi-level sequence descriptors. For the descriptor at each stage, we design an effective auxiliary pairwise loss that is jointly optimized with a triplet loss. To integrate the hierarchical representation, we propose an intuitive yet effective summation-based similarity integration scheme that matches identities more accurately. Furthermore, we extend our framework with a multi-model ensemble strategy, which effectively assembles three popular CNN models to represent walking sequences more comprehensively and improve performance. Extensive experiments on three video-based ReID datasets show that the proposed framework outperforms state-of-the-art methods.
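To make the pipeline described in the abstract concrete, the following is a minimal sketch of the two core ideas: building a sequence descriptor from per-frame features via a basic mathematical operation, and summation-based integration of per-stage similarities. The specific choices here (temporal mean pooling as the "basic mathematical calculation" and Euclidean distance as the similarity measure) are illustrative assumptions, not the paper's confirmed design.

```python
import numpy as np

def sequence_descriptor(frame_feats):
    """Aggregate per-frame features (T, D) from one CNN stage into a
    single sequence descriptor via temporal mean pooling -- a simple
    mathematical operation assumed here in place of costly recurrent
    temporal modeling."""
    return frame_feats.mean(axis=0)

def integrated_similarity(query_stages, gallery_stages):
    """Summation-based integration across hierarchical stages: compute a
    per-stage distance between sequence descriptors and sum them.
    Smaller totals indicate a better identity match. Each element of the
    input lists is a (T, D_stage) array of frame features."""
    total = 0.0
    for q_frames, g_frames in zip(query_stages, gallery_stages):
        dq = sequence_descriptor(q_frames)
        dg = sequence_descriptor(g_frames)
        total += np.linalg.norm(dq - dg)  # Euclidean distance (assumed)
    return total
```

With this scheme, ranking a gallery against a query only requires sorting by the summed distance, so each stage contributes evidence without any learned fusion parameters.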
Funding Information
  • National Basic Research Program of China (2016YFB1001002)
