Epipolar Constraints for Vision-Aided Inertial Navigation
- 1 January 2005
- conference paper
- Published by Institute of Electrical and Electronics Engineers (IEEE)
- Vol. 2, 221-228
- https://doi.org/10.1109/acvmot.2005.48
Abstract
This paper describes a new method to improve inertial navigation using feature-based constraints from one or more video cameras. The proposed method lengthens the period of time during which a human or vehicle can navigate in GPS-denied environments. Our approach integrates well with existing navigation systems, because we invoke general sensor models that represent a wide range of available hardware. The inertial model includes errors in bias, scale, and random walk. Any purely projective camera and tracking algorithm may be used, as long as the tracking output can be expressed as ray vectors extending from known locations on the sensor body. A modified linear Kalman filter performs the data fusion. Unlike traditional SLAM, our state vector contains only inertial sensor errors related to position. This choice allows uncertainty to be properly represented by a covariance matrix. We do not augment the state with feature coordinates. Instead, image data contributes stochastic epipolar constraints over a broad baseline in time and space, resulting in improved observability of the IMU error states. The constraints lead to a relative residual and associated relative covariance, defined partly by the state history. Navigation results are presented using high-quality synthetic data and real fisheye imagery.
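The epipolar constraint underlying the measurement model can be illustrated with a minimal sketch. This is not the paper's filter implementation; it only shows the geometric residual the abstract refers to: given two bearing rays observed from two poses related by a rotation `R` and baseline `t`, the scalar `ray2ᵀ [t]× R ray1` vanishes when the rays and the baseline are coplanar. All names here (`skew`, `epipolar_residual`) are illustrative, not from the paper.

```python
import numpy as np

def skew(v):
    # Skew-symmetric matrix such that skew(v) @ u == np.cross(v, u).
    return np.array([[0.0,  -v[2],  v[1]],
                     [v[2],  0.0,  -v[0]],
                     [-v[1], v[0],  0.0]])

def epipolar_residual(ray1, ray2, R, t):
    # Essential matrix E = [t]x R for the pose change p2 = R @ p1 + t.
    # The residual ray2^T E ray1 is zero for a consistent static feature;
    # in the paper's framework such residuals (with covariances) feed a
    # Kalman filter rather than a feature map.
    E = skew(t) @ R
    return float(ray2 @ E @ ray1)

if __name__ == "__main__":
    # Synthetic check: one 3D point seen before and after a known motion.
    th = 0.1  # rotation about z, radians
    R = np.array([[np.cos(th), -np.sin(th), 0.0],
                  [np.sin(th),  np.cos(th), 0.0],
                  [0.0,         0.0,        1.0]])
    t = np.array([0.3, -0.1, 0.2])
    p1 = np.array([1.0, 2.0, 5.0])      # point in first camera frame
    p2 = R @ p1 + t                     # same point in second camera frame
    r = epipolar_residual(p1 / np.linalg.norm(p1),
                          p2 / np.linalg.norm(p2), R, t)
    print(abs(r))  # near zero for a consistent observation
```

Because the residual depends only on unit bearing rays, any purely projective camera (including the fisheye lens used in the experiments) fits this form once its measurements are converted to rays.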
This publication has 13 references indexed in Scilit:
- Visually augmented navigation in an unstructured environment using a delayed state history. IEEE, 2004
- Panoramic mosaicing with a 180° field of view lens. IEEE, 2003
- Eye design in the plenoptic space of light rays. IEEE, 2003
- Mapping Partially Observable Features from Multiple Uncertain Vantage Points. The International Journal of Robotics Research, 2002
- Structure from motion causally integrated over time. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2002
- Scalable Extrinsic Calibration of Omni-Directional Image Networks. International Journal of Computer Vision, 2002
- Optimal motion estimation from multiple images by normalized epipolar constraint. Communications in Information and Systems, 2001
- Fast and globally convergent pose estimation from video images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2000
- A Combined Corner and Edge Detector. British Machine Vision Association and Society for Pattern Recognition, 1988
- A computer algorithm for reconstructing a scene from two projections. Nature, 1981