Fusion of Inertial and Depth Sensor Data for Robust Hand Gesture Recognition

Abstract
This paper presents the first attempt at fusing data from inertial and vision-based depth sensors within the framework of a hidden Markov model (HMM) for hand gesture recognition. The data fusion approach introduced in this paper is general purpose in the sense that it can be used to recognize various body movements. It is shown that the depth and inertial sensor data act in a complementary manner, leading to a more robust recognition outcome than when either sensor is used on its own. The recognition rates obtained for the single-hand gestures in the Microsoft MSR dataset indicate that our fusion approach provides improved recognition in real time and under realistic conditions.
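The abstract describes feature-level fusion of the two sensing modalities scored under an HMM. The sketch below, which is a generic illustration and not the paper's exact pipeline, concatenates hypothetical depth and inertial feature vectors into one fused observation sequence and classifies it by evaluating the forward-algorithm log-likelihood under a left-to-right Gaussian HMM per gesture class; all dimensions, state counts, and class parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_gauss(x, mean, var):
    """Log-density of a diagonal-covariance Gaussian."""
    return -0.5 * np.sum(np.log(2.0 * np.pi * var) + (x - mean) ** 2 / var)

def emission_logs(x, means, variances):
    """Per-state emission log-probabilities for one observation."""
    return np.array([log_gauss(x, means[j], variances[j])
                     for j in range(len(means))])

def forward_loglik(obs, log_pi, log_A, means, variances):
    """Forward algorithm in log space: returns log p(obs | HMM)."""
    alpha = log_pi + emission_logs(obs[0], means, variances)
    for t in range(1, len(obs)):
        b = emission_logs(obs[t], means, variances)
        # sum over previous states i for each current state j
        alpha = b + np.logaddexp.reduce(alpha[:, None] + log_A, axis=0)
    return np.logaddexp.reduce(alpha)

# Hypothetical feature dimensions: 4 depth features + 6 inertial
# (3-axis accelerometer + 3-axis gyroscope) fused by concatenation.
dim_depth, dim_inertial = 4, 6
dim = dim_depth + dim_inertial
n_states = 3

# Left-to-right HMM: start in state 0, forward transitions only.
log_pi = np.log(np.array([1.0, 1e-12, 1e-12]))
A = np.array([[0.8, 0.2, 0.0],
              [0.0, 0.8, 0.2],
              [0.0, 0.0, 1.0]])
log_A = np.log(A + 1e-12)

# Illustrative per-class state means (one Gaussian HMM per gesture class).
class_means = {
    0: np.stack([np.full(dim, s) for s in (0.0, 0.5, 1.0)]),
    1: np.stack([np.full(dim, s) for s in (3.0, 3.5, 4.0)]),
}
variances = np.full((n_states, dim), 0.25)

# Synthesize a fused test sequence resembling gesture class 0.
states = [0] * 5 + [1] * 5 + [2] * 5
obs = np.stack([class_means[0][s] + 0.1 * rng.standard_normal(dim)
                for s in states])

# Classify: pick the class whose HMM assigns the highest log-likelihood.
scores = {c: forward_loglik(obs, log_pi, log_A, m, variances)
          for c, m in class_means.items()}
pred = max(scores, key=scores.get)
print("predicted gesture class:", pred)
```

In this fusion scheme a frame from either sensor alone would yield a shorter feature vector; concatenation lets the HMM emission model exploit both modalities jointly, which is the complementary effect the abstract refers to.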
