Smart Phone Based Human Activity Recognition

Abstract
Human Activity Recognition (HAR) is a field that uses collected data to classify different human actions. One simple and general approach to HAR is to use the sensor data from a mobile device to recognize the patterns behind complex motions. Early studies show promising results on simple activities using manually selected features from accelerometer readings. As newer publicly available datasets include more complex data and activities, manual feature selection has become cumbersome and impractical, and faces limitations in finding the optimal feature sets for HAR. In this paper, we present an empirical approach to defining models of 3D tensor data structures from 2D time series data obtained from multiple sensors on a smart phone, and a new Convolutional Neural Network (CNN) model, which uses the tensor data and performs automatic feature extraction and classification for HAR. We use the public benchmark dataset, MobiAct v2.0, to train and validate our model, which achieved better overall performance in classifying 11 Activities of Daily Living (ADL) than the state-of-the-art approaches. Compared to the approach presented by Chatzaki et al., which has a very high rate of misclassifications for the car-step out (CSO), car-step in (CSI), sit to stand (CHU), and stand to sit (SCH) classes, our proposed approach has 15% higher sensitivity for each of these activities, with the optimal number of training epochs being only 25.
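The core idea of combining 2D time series from multiple sensors into a 3D tensor can be illustrated with a minimal sketch. This is not the paper's exact construction; the window length, sensor set, and stacking axis below are illustrative assumptions, using synthetic data in place of MobiAct recordings.

```python
import numpy as np

# Hypothetical sensor windows: T samples of 3-axis accelerometer and
# gyroscope readings, each a 2D time series of shape (time, axis).
T = 128                            # window length in samples (assumed)
acc = np.random.randn(T, 3)        # accelerometer: (time, x/y/z)
gyr = np.random.randn(T, 3)        # gyroscope:     (time, x/y/z)

# Stack the per-sensor 2D windows along a new channel axis to form a
# 3D tensor, analogous to a multi-channel image that a CNN can consume.
window = np.stack([acc, gyr], axis=-1)
print(window.shape)  # (128, 3, 2): (time, axis, sensor channel)
```

A CNN can then convolve over the time and axis dimensions while treating each sensor as a channel, letting the network learn features automatically instead of relying on hand-crafted statistics.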