Simultaneous Motion and Structure Estimation by Fusion of Inertial and Vision Data

Abstract
In mobile robotics, head-mounted augmented reality (AR) devices, and computer vision, it is essential to continuously estimate both the egomotion of the sensor and the structure of the environment. This paper presents the system developed in the SmartTracking project, which integrates visual and inertial sensors in a combined estimation scheme. The sparse structure estimation is based on the detection of corner features in the environment. Starting from a single known position, the system can move into an unknown environment. The vision and inertial data are fused, and the performance of the Unscented Kalman filter (UKF) and the Extended Kalman filter (EKF) is compared for this task. The filters are designed to handle asynchronous input from visual and inertial sensors, which typically operate at different and possibly varying rates. Additionally, a bank of Extended Kalman filters, one per corner feature, is used to estimate the position and the quality of structure points and to include them in the structure estimation process. The system is demonstrated on a mobile robot executing known motions, so that the egomotion estimates in an unknown environment can be compared against ground truth.
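The asynchronous fusion scheme described in the abstract can be illustrated with a minimal sketch: a single filter is propagated forward to the timestamp of whichever measurement arrives next, then updated with that sensor's measurement model. The following Python code is a hypothetical illustration under simplifying assumptions (a linear constant-velocity process model and linear measurement models), not the paper's implementation; the class name AsyncEKF and the measurement tuples are inventions for the example.

```python
import numpy as np

class AsyncEKF:
    """Minimal EKF that fuses time-stamped measurements from sensors
    running at different, possibly varying rates (illustrative sketch)."""

    def __init__(self, x0, P0, Q):
        self.x = x0   # state estimate, e.g. [position; velocity]
        self.P = P0   # state covariance
        self.Q = Q    # process-noise density (scaled by elapsed time)
        self.t = 0.0  # time of the current estimate

    def predict(self, t):
        """Propagate the state to time t with a constant-velocity model."""
        dt = t - self.t
        n = self.x.size // 2
        F = np.eye(self.x.size)
        F[:n, n:] = dt * np.eye(n)  # position += velocity * dt
        self.x = F @ self.x
        self.P = F @ self.P @ F.T + self.Q * dt
        self.t = t

    def update(self, z, H, R):
        """Measurement update; H and R depend on which sensor fired."""
        y = z - H @ self.x                      # innovation
        S = H @ self.P @ H.T + R                # innovation covariance
        K = self.P @ H.T @ np.linalg.inv(S)     # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(self.x.size) - K @ H) @ self.P

# Fuse, e.g., a 100 Hz inertial stream and a ~25 Hz vision stream by
# processing all measurements in timestamp order. Each measurement is a
# (timestamp, z, H, R) tuple supplied by the corresponding sensor driver.
def fuse(ekf, measurements):
    for t, z, H, R in sorted(measurements, key=lambda m: m[0]):
        ekf.predict(t)
        ekf.update(z, H, R)
```

Ordering measurements by timestamp before the predict/update cycle is what allows the two sensors to run at independent rates: no fixed interleaving of vision and inertial updates is assumed.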
