Open Access

Scale-invariant localization using quasi-semantic object landmarks

Published: 25 February 2021
Autonomous Robots, Volume 45, pp. 407–420; doi:10.1007/s10514-021-09973-w

Abstract: This work presents Object Landmarks, a new type of visual feature designed for visual localization over major changes in distance and scale. An Object Landmark consists of a bounding box $\mathbf{b}$ defining an object, a descriptor $\mathbf{q}$ of that object produced by a Convolutional Neural Network, and a set of classical point features within $\mathbf{b}$. We evaluate Object Landmarks on visual odometry and place-recognition tasks, and compare them against several modern approaches. We find that Object Landmarks enable superior localization over major scale changes, reducing error by as much as 18% and increasing robustness to failure by as much as 80% versus the state-of-the-art. They allow localization under scale change factors up to 6, where state-of-the-art approaches break down at factors of 3 or more.
Keywords: Visual odometry / Place recognition / Visual features / Robotic localization
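The three-part feature described in the abstract (a bounding box, a CNN descriptor, and a set of point features inside the box) can be sketched as a simple container. This is a hypothetical illustration only; the class name, field names, and types are assumptions, not the authors' implementation:

```python
from dataclasses import dataclass, field

import numpy as np


@dataclass
class ObjectLandmark:
    """Hypothetical sketch of an Object Landmark as described in the abstract.

    bbox:       bounding box b as (x_min, y_min, x_max, y_max) in pixels
    descriptor: CNN-produced descriptor q of the object (vector length assumed)
    keypoints:  classical point features (x, y) detected within bbox
    """
    bbox: tuple
    descriptor: np.ndarray
    keypoints: list = field(default_factory=list)

    def contains(self, x: float, y: float) -> bool:
        """Check whether a pixel coordinate lies inside the bounding box b."""
        x_min, y_min, x_max, y_max = self.bbox
        return x_min <= x <= x_max and y_min <= y <= y_max


# Example usage with made-up values:
lm = ObjectLandmark(bbox=(10, 20, 110, 220),
                    descriptor=np.zeros(128),
                    keypoints=[(30, 40), (50, 60)])
print(lm.contains(30, 40))  # True: the keypoint lies inside b
```

Matching such landmarks across views would presumably compare the descriptors $\mathbf{q}$ (e.g. by distance), with the point features inside $\mathbf{b}$ providing the fine-grained geometry; the abstract does not specify those details.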
