Moving Object Detection for Airport Scene Using Patterns of Motion and Appearance

Abstract
This paper presents a novel method for localizing and recognizing moving objects in real airport surface scenes. Unlike traditional applications, moving object detection (MOD) on the airport surface is more challenging because the background is an open outdoor environment: target objects are usually low in resolution, and the MOD task is vulnerable to many undesired changes such as cloud movement and illumination variation. To address these issues, this paper proposes a unified and effective deep-learning-based MOD architecture that combines both appearance and motion cues. Specifically, a novel moving region proposal generation module is first designed, which locates the regions of moving objects based on motion information. Meanwhile, a novel cascade multilayer feature fusion module with transposed convolution produces convolutional feature maps that are both semantically rich and fine in resolution for category recognition. Finally, a large-scale dataset is manually constructed from daily surveillance videos of a real airport surface. Results show that the proposed method outperforms state-of-the-art solutions in extracting moving objects from airport surface scenes.
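The abstract does not give implementation details of the moving region proposal module, which in the paper is a learned, motion-cue-driven component. As a purely illustrative baseline of the underlying idea (deriving candidate moving regions from inter-frame motion), the sketch below uses simple frame differencing in NumPy; the function name, threshold value, and single-box output are assumptions for illustration, not the authors' method.

```python
import numpy as np

def moving_region_proposal(prev_frame, curr_frame, thresh=25):
    """Illustrative motion-based region proposal via frame differencing.

    prev_frame, curr_frame: 2-D uint8 grayscale arrays of equal shape.
    Returns a list of (x_min, y_min, x_max, y_max) boxes enclosing
    pixels whose inter-frame intensity change exceeds `thresh`.
    (A single enclosing box is returned here for simplicity; the
    paper's learned module would produce refined per-object proposals.)
    """
    # Signed difference to avoid uint8 wrap-around, then magnitude.
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    mask = diff > thresh          # binary motion mask
    ys, xs = np.nonzero(mask)     # coordinates of changed pixels
    if ys.size == 0:
        return []                 # no motion detected
    return [(int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max()))]
```

A learned proposal network would replace the fixed threshold and connected-region heuristic with convolutional features, but the motion mask above captures the cue such a module consumes.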
Funding Information
  • National Natural Science Foundation of China (Grant No. U1933134)
  • Sichuan Science and Technology Program (No. 2020YFG0134)
  • Sichuan University Grant (No. 2020SCUNG205)
