Trilateration Positioning Using Hybrid Camera–LiDAR System with Spherical Landmark Surface Fitting

Abstract
Navigation in Global Positioning System–denied environments is notoriously difficult for small unmanned aerial vehicles due to the reduced number of visible satellites and multipath interference in urban canyons. Several existing methods can be used for navigating in a constrained environment, but they often require additional specialized sensing hardware for a localization solution or provide only local-frame navigation. Autonomous systems often include LiDAR and RGB cameras for mapping, sensing, or obstacle avoidance. Utilizing these sensors for navigation could provide the sole localization solution, or one complementary to other Global Positioning System–denied localization methods, in a global or local frame, especially in urban canyons where unique landmarks can be identified. Information from a scanning LiDAR can be correlated with camera pixel coordinates and used to range unique visual landmarks with known locations. The present work included surface-function fitting to reduce ranging error to spherical landmarks, since multiple lasers were able to range each landmark. Simulation and experimental validation of the unique camera–LiDAR modified trilateration process was undertaken using colored light orbs with known positions as landmarks, ranged with a 16-laser scanning LiDAR. Position error was computed and verified that the position-estimation process was successful across varying landmark configurations and viewing angles in simulation. Experimental results verified the process while also providing higher accuracy, for the tested setup, than a previous method that used a single point on each landmark surface.
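To make the two geometric steps summarized above concrete, the following is a minimal illustrative sketch (not the authors' implementation): an algebraic least-squares sphere fit that recovers a spherical landmark's center from multiple LiDAR returns on its surface, followed by a linearized least-squares trilateration from the resulting ranges to landmarks with known positions. All function names and the landmark layout are hypothetical.

```python
import numpy as np


def fit_sphere(points):
    """Algebraic (Kasa-style) least-squares sphere fit to Nx3 surface points.

    Solves 2*c.p + (r^2 - |c|^2) = |p|^2 for each point p, which is linear
    in the center c and the scalar d = r^2 - |c|^2. Returns (center, radius).
    """
    P = np.asarray(points, dtype=float)
    A = np.hstack([2.0 * P, np.ones((P.shape[0], 1))])
    b = (P ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = sol[:3]
    radius = np.sqrt(sol[3] + center @ center)
    return center, radius


def trilaterate(landmarks, ranges):
    """Linearized least-squares position from known landmark centers.

    Subtracting the first range equation from the others removes the
    quadratic term in the unknown position x, leaving the linear system
    2*(L_i - L_0).x = r_0^2 - r_i^2 + |L_i|^2 - |L_0|^2. Four or more
    non-coplanar landmarks determine a unique 3-D position.
    """
    L = np.asarray(landmarks, dtype=float)
    r = np.asarray(ranges, dtype=float)
    A = 2.0 * (L[1:] - L[0])
    b = (r[0] ** 2 - r[1:] ** 2
         + (L[1:] ** 2).sum(axis=1) - (L[0] ** 2).sum())
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos
```

In a simulation of the process, one would generate surface returns on each orb, fit each sphere to obtain its center, compute the range to each fitted center, and pass the centers and ranges to `trilaterate`; with noise-free ranges the true sensor position is recovered exactly.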
Funding Information
  • Ohio Federal Research Network (The Ohio Federal Research Network Round 5 SOARING)