Energy-efficient RL-based aerial network deployment testbed for disaster areas

Abstract
The rapid deployment of wireless devices with 5G and beyond has enabled a connected world. However, the immediate surge in demand right after a disaster can temporarily paralyze network infrastructure. A continuous flow of information is crucial during disasters to coordinate rescue operations and locate survivors. Communication infrastructure built for users in disaster areas must provide rapid deployment, increased coverage, and high availability. Unmanned aerial vehicles (UAVs) offer a potential solution for rapid deployment, as they are not affected by traffic jams or physical road damage during a disaster. In addition, ad-hoc WiFi communication allows broadcast domains to be formed on a clear channel, which eases one-to-many communication. Moreover, reinforcement learning (RL) helps reduce the computational cost and increase the solution accuracy of the NP-hard aerial network deployment problem. To this end, a novel flying WiFi ad-hoc network management model is proposed in this paper. The model utilizes deep Q-learning to maintain quality of service (QoS), increase user equipment (UE) coverage, and optimize power efficiency. Furthermore, a testbed is deployed on the Istanbul Technical University (ITU) campus to train the developed model. Training results of the model on the testbed show a packet delivery ratio above 90% as the QoS metric, coverage above 97% for the users in the flow tables, and an average power consumption of 0.28 kJ/bit.