Indoor localization and visualization using a human-operated backpack system

Abstract
Automated 3D modeling of building interiors is useful in applications such as virtual reality and entertainment. Using a human-operated backpack system equipped with 2D laser scanners and inertial measurement units (IMUs), we develop scan-matching-based algorithms to localize the backpack in complex indoor environments such as a T-shaped corridor intersection, a staircase, and two indoor hallways on separate floors connected by a staircase. When building 3D textured models, we find that the localization obtained from scan matching is not pixel-accurate, leading to misalignment between successive images used for texturing. To address this, we propose an image-based pose estimation algorithm to refine the results of our scan-matching-based localization. Finally, we use the localization results within an image-based renderer to enable virtual walkthroughs of indoor environments using imagery from cameras on the same backpack. Our renderer uses a three-step process to determine which image to display, and a RANSAC framework to estimate homographies for mosaicking neighboring images with common SIFT features. In addition, our renderer uses plane-fitted models of the 3D point cloud resulting from the laser scans to detect occlusions. We characterize the performance of our image-based renderer on an unstructured set of 2709 images obtained during a five-minute backpack data acquisition in a T-shaped corridor intersection.
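
To illustrate the RANSAC-based homography step mentioned in the abstract, the following Python sketch estimates a homography between two neighboring images from shared SIFT features using OpenCV. It is not the paper's implementation; the image file names, the 0.75 ratio-test threshold, and the 5-pixel reprojection tolerance are illustrative assumptions.

import cv2
import numpy as np

# Hypothetical neighboring views; any two overlapping images would do.
img1 = cv2.imread("view_a.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view_b.jpg", cv2.IMREAD_GRAYSCALE)

# Detect SIFT keypoints and compute descriptors in both images.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Match descriptors and keep matches that pass Lowe's ratio test.
matcher = cv2.BFMatcher()
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]

if len(good) >= 4:  # a homography needs at least four correspondences
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    # RANSAC rejects outlier matches; 5.0 px is an assumed inlier tolerance.
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    # H warps img1 into img2's frame, allowing the two views to be mosaicked.
    mosaic = cv2.warpPerspective(img1, H, (img2.shape[1], img2.shape[0]))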
