A vision-based approach for autonomous landing

Abstract
Monocular vision is frequently used on Micro Air Vehicles (MAVs) for tasks such as autonomous navigation, tracking, search, and autonomous landing. In the context of landing a MAV autonomously on a platform, we combine template-based matching in an image-pyramid scheme with an edge detector, so that the landing zone is localised via image processing on a frame-to-frame basis. Images captured by the MAV's onboard camera are processed with a multi-scale strategy to detect the landing zone at different scales. We assessed our approach in real-time outdoor experiments at different heights using a Parrot Bebop 2.0.
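As a rough illustration of the multi-scale, edge-based template matching described above, the sketch below searches for a landing-pad template over an image pyramid of Canny edge maps using OpenCV. It is an assumption about one way to realise this idea, not the authors' implementation; the file names, Canny thresholds, scale factors, and matching method are all hypothetical choices.

```python
# Minimal sketch (not the authors' implementation) of multi-scale template
# matching on edge maps, assuming OpenCV and a hypothetical template image
# "landing_pad.png" of the landing platform.
import cv2

def detect_landing_zone(frame_gray, template_gray, scales=(1.0, 0.75, 0.5, 0.25)):
    """Return (score, (x, y, w, h), scale) of the best match across an
    image pyramid, matching Canny edge maps of frame and template."""
    template_edges = cv2.Canny(template_gray, 50, 150)   # thresholds are assumptions
    th, tw = template_edges.shape
    best = None
    for s in scales:
        # Downscale the frame to look for the pad at different apparent sizes.
        resized = cv2.resize(frame_gray, None, fx=s, fy=s,
                             interpolation=cv2.INTER_AREA)
        if resized.shape[0] < th or resized.shape[1] < tw:
            continue
        frame_edges = cv2.Canny(resized, 50, 150)
        result = cv2.matchTemplate(frame_edges, template_edges,
                                   cv2.TM_CCOEFF_NORMED)
        _, max_val, _, max_loc = cv2.minMaxLoc(result)
        if best is None or max_val > best[0]:
            # Map the match location back to full-resolution coordinates.
            x, y = int(max_loc[0] / s), int(max_loc[1] / s)
            w, h = int(tw / s), int(th / s)
            best = (max_val, (x, y, w, h), s)
    return best

if __name__ == "__main__":
    template = cv2.imread("landing_pad.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
    frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)           # hypothetical file
    match = detect_landing_zone(frame, template)
    if match is not None:
        score, (x, y, w, h), scale = match
        print(f"Landing zone at ({x}, {y}), size {w}x{h}, "
              f"score {score:.2f}, pyramid scale {scale}")
```

Matching on edge maps rather than raw intensities makes the search less sensitive to illumination changes, which is one reason to pair a template matcher with an edge detector in outdoor conditions.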
