Deep Reinforcement Learning Based Resource Management in UAV-Assisted IoT Networks
Open Access
- Published: 1 March 2021
- Research article (journal)
- Published by MDPI AG in Applied Sciences
- Vol. 11 (5), 2163
- https://doi.org/10.3390/app11052163
Abstract
Resource management in wireless networks with massive Internet of Things (IoT) users is one of the most crucial issues for the advancement of fifth-generation (5G) networks. The main objective of this study is to optimize resource usage in IoT networks. Firstly, an unmanned aerial vehicle (UAV) is considered as a base station for air-to-ground communications. Secondly, according to the distribution and fluctuation of signals, the IoT devices are categorized into urban and suburban clusters; this clustering simplifies management of the environment. Thirdly, real data collection and preprocessing tasks are carried out. Fourthly, a deep reinforcement learning approach is proposed as the main system development scheme for resource management. Fifthly, the K-means and round-robin scheduling algorithms are applied for clustering and for managing the users' resource requests, respectively. Then, the TensorFlow (Python) programming tool is used to test the overall capability of the proposed method. Finally, this paper evaluates the proposed approach against related works under different scenarios. According to the experimental findings, the proposed scheme shows promising outcomes; on the evaluation tasks it exhibits rapid convergence, suitability for heterogeneous IoT networks, and low complexity.
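The two classical algorithms named in the abstract, K-means for splitting devices into urban and suburban clusters and round-robin for serving users' resource requests, can be sketched in plain Python. This is a minimal illustration, not the paper's implementation: the function names, device coordinates, and slot counts are assumptions for the example.

```python
import math
import random
from collections import deque

def kmeans(points, k=2, iters=20, seed=0):
    """Plain k-means: partition 2-D device coordinates into k clusters
    (e.g. a dense 'urban' group and a sparse 'suburban' group)."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)          # random initial centroids
    for _ in range(iters):
        # Assign each point to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: math.dist(p, centroids[c]))
            clusters[i].append(p)
        # Recompute each centroid as the mean of its assigned points.
        centroids = [
            ((sum(x for x, _ in c) / len(c), sum(y for _, y in c) / len(c))
             if c else centroids[i])
            for i, c in enumerate(clusters)
        ]
    return clusters, centroids

def round_robin(users, num_slots):
    """Grant resource slots to users one at a time, cycling in fixed order."""
    queue = deque(users)
    schedule = []
    for _ in range(num_slots):
        user = queue.popleft()
        schedule.append(user)   # this user gets the current slot
        queue.append(user)      # then rejoins the back of the queue
    return schedule

# Hypothetical usage: three devices near the origin form one cluster,
# two distant devices form the other; slots are then shared in turn.
devices = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (10.0, 10.0), (11.0, 10.0)]
clusters, centroids = kmeans(devices, k=2)
schedule = round_robin(["u1", "u2", "u3"], num_slots=7)
```

Round-robin gives every user an equal share of slots regardless of channel quality, which keeps the scheduler simple; the paper's deep reinforcement learning component handles the remaining resource-management decisions.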