Deep Reinforcement Learning based Resource Allocation in Low Latency Edge Computing Networks
- 1 August 2018
- conference paper
- Published by Institute of Electrical and Electronics Engineers (IEEE)
Abstract
In this paper, we investigate strategies for allocating computational resources via deep reinforcement learning in mobile edge computing networks that operate with finite blocklength codes to support low-latency communications. The end-to-end (E2E) reliability of the service is addressed, accounting for both the delay violation probability and the decoding error probability. Employing a deep reinforcement learning method, namely deep Q-learning, we design an intelligent agent at the edge computing node that learns a real-time adaptive policy for allocating computational resources to the offloaded tasks of multiple users, with the goal of improving the average E2E reliability. Simulations show that, under different task arrival rates, the learned policy increases the number of tasks served within the delay constraint, lowering the delay violation rate while guaranteeing an acceptable level of decoding error probability. Moreover, the proposed deep reinforcement learning approach outperforms the random and equal scheduling benchmarks.
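The abstract describes a Q-learning agent at the edge node that adaptively assigns computational resources to offloaded tasks. As a rough illustration of that control loop, the sketch below uses a simplified *tabular* Q-learning agent with an epsilon-greedy policy; the paper itself uses a deep (neural-network) Q-function, and the state space, action space, toy environment, and reward here are illustrative assumptions, not the authors' system model.

```python
import random

random.seed(0)  # reproducibility of this toy run

# Assumed discretization (not from the paper):
N_STATES = 5    # queue-length levels at the edge computing node
N_ACTIONS = 3   # discrete computational-resource shares per decision
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

# Tabular Q-function; the paper replaces this table with a deep network.
Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

def choose_action(state):
    """Epsilon-greedy selection over resource-allocation actions."""
    if random.random() < EPS:
        return random.randrange(N_ACTIONS)
    row = Q[state]
    return row.index(max(row))

def update(state, action, reward, next_state):
    """Standard Q-learning temporal-difference update."""
    best_next = max(Q[next_state])
    Q[state][action] += ALPHA * (reward + GAMMA * best_next - Q[state][action])

def step(state, action):
    """Toy environment: allocating more resources (larger action index)
    drains the task queue faster; a full queue models a delay violation."""
    next_state = max(0, min(N_STATES - 1, state + 1 - action))
    reward = -1.0 if next_state == N_STATES - 1 else 1.0
    return reward, next_state

state = 0
for _ in range(2000):
    a = choose_action(state)
    r, nxt = step(state, a)
    update(state, a, r, nxt)
    state = nxt
```

After training, the greedy policy tends to spend more resources when the queue is long, which mirrors the abstract's goal of reducing the delay violation rate across varying task arrival rates.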