Counter a Drone in a Complex Neighborhood Area by Deep Reinforcement Learning
Open Access
- 18 April 2020
- Sensors, Vol. 20 (8), 2320
- https://doi.org/10.3390/s20082320
Abstract
Counter-drone technology using artificial intelligence (AI) is an emerging and rapidly developing field. Given recent advances in AI, counter-drone systems with AI can be highly accurate and efficient in fighting against drones. The time required to engage the target can be shorter than with methods based on human intervention, such as bringing down a malicious drone with a machine gun. AI can also identify and classify the target with high precision, preventing a false interdiction of the targeted object. We believe that counter-drone technology with AI will bring important advantages against the threats posed by some drones and will help make the skies safer and more secure. In this study, a deep reinforcement learning (DRL) architecture is proposed to counter a drone with another drone, the learning drone, which autonomously avoids all kinds of obstacles inside a suburban neighborhood environment. The environment is a simulator containing stationary obstacles such as trees, cables, parked cars, and houses. In addition, a non-malicious third drone, acting as a moving obstacle inside the environment, is also included. In this way, the learning drone is trained to detect stationary and moving obstacles, and to counter and catch the target drone without crashing into any other obstacle in the neighborhood. The learning drone has a front camera and continuously captures depth images. Every depth image is part of the state used in the DRL architecture. The state also includes scalar parameters such as velocities, distances to the target, distances to defined geofences, and track and elevation angles. The state image and scalars are processed by a neural network that joins the two state parts into a unique flow. Moreover, transfer learning is tested by using the weights of the first fully trained model.
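The two-branch state described above (a depth image plus scalar flight parameters joined into a single flow) can be sketched as below. This is a minimal NumPy toy, not the paper's network: the layer widths, the 128x128 depth-image size, the 8 scalar entries, and the 9-action output are all illustrative assumptions, and a crude pooling plus linear projection stands in for the convolutional image branch.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def q_values(depth_image, scalars, params):
    """Toy two-branch Q-network: an image branch and a scalar branch
    are each embedded, then concatenated into a unique flow.
    All sizes are illustrative assumptions, not the paper's design."""
    # Image branch: coarse pooling + linear projection stands in for
    # the convolutional layers that process the depth image.
    pooled = depth_image[::4, ::4].ravel()        # 128x128 -> 1024 values
    img_feat = relu(pooled @ params["W_img"])     # image embedding, (64,)
    # Scalar branch: velocities, distances to target/geofences, angles.
    sc_feat = relu(scalars @ params["W_sc"])      # scalar embedding, (16,)
    # Join the two state parts into a unique flow.
    joint = np.concatenate([img_feat, sc_feat])   # (80,)
    return joint @ params["W_out"]                # one Q-value per action

rng = np.random.default_rng(0)
params = {
    "W_img": rng.normal(0, 0.01, (32 * 32, 64)),  # assumes 128x128 input
    "W_sc":  rng.normal(0, 0.01, (8, 16)),        # assumes 8 scalar entries
    "W_out": rng.normal(0, 0.01, (80, 9)),        # assumes 9 actions
}
q = q_values(rng.random((128, 128)), rng.random(8), params)
action = int(np.argmax(q))  # greedy action over the joint state
```

The point of the sketch is the fusion step: each modality is embedded separately and the concatenated vector feeds a single output head, so gradients from the action values reach both the image and the scalar branches.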
With transfer learning, one of the best jump-starts achieved higher mean rewards (close to 35 more) at the beginning of training. Transfer learning also shows that the number of crashes during training can be reduced: the total number of crashed episodes fell by 65% when all ground obstacles were included.
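The transfer-learning jump-start amounts to initializing the new agent from the weights of the first fully trained model instead of from scratch. A minimal sketch, assuming the same toy parameter dictionary layout as above and a hypothetical choice to reinitialize only the output head:

```python
import numpy as np

rng = np.random.default_rng(1)
shapes = {"W_img": (1024, 64), "W_sc": (8, 16), "W_out": (80, 9)}

def init_params(rng, shapes):
    """Fresh random initialization for every weight matrix."""
    return {k: rng.normal(0, 0.01, s) for k, s in shapes.items()}

# Stand-in for the first fully trained model (the transfer source).
pretrained = init_params(rng, shapes)

# Jump-start: copy the source weights into the new agent rather than
# starting from a random initialization.
student = {k: v.copy() for k, v in pretrained.items()}

# Illustrative variant: reinitialize only the output head so the new
# agent keeps the learned features but relearns its action values.
student["W_out"] = rng.normal(0, 0.01, shapes["W_out"])
```

Starting from transferred feature weights is what produces the higher initial mean reward; which layers to copy versus reinitialize is a design choice, and the head-reset shown here is an assumption for illustration.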