Deep Reinforcement Learning for Offloading and Resource Allocation in Vehicle Edge Computing and Networks
- 14 August 2019
- research article
- Published by Institute of Electrical and Electronics Engineers (IEEE) in IEEE Transactions on Vehicular Technology
- Vol. 68 (11), 11158-11168
- https://doi.org/10.1109/tvt.2019.2935450
Abstract
Mobile Edge Computing (MEC) is a promising technology for extending diverse services to the edge of the Internet of Things (IoT) system. However, static edge server deployment may cause "service holes" in IoT networks, in which the locations and service requests of User Equipments (UEs) may change dynamically. In this paper, we first explore a vehicle edge computing network architecture in which vehicles act as mobile edge servers that provide computation services for nearby UEs. We then propose a vehicle-assisted offloading scheme for UEs that accounts for the delay of the computation task. Accordingly, an optimization problem is formulated to maximize the long-term utility of the vehicle edge computing network. Considering the stochastic vehicle traffic, dynamic computation requests, and time-varying communication conditions, the problem is further formulated as a semi-Markov process, and two reinforcement learning methods, a Q-learning based method and a deep reinforcement learning (DRL) method, are proposed to obtain the optimal policies for computation offloading and resource allocation. Finally, we analyze the effectiveness of the proposed scheme in the vehicular edge computing network through numerical results.
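To make the Q-learning approach mentioned in the abstract concrete, the following is a minimal, self-contained sketch of a tabular Q-learning agent for a toy offloading decision. The state space (number of nearby vehicle edge servers), the action set (compute locally vs. offload), the reward function, and the transition dynamics are all hypothetical stand-ins invented for illustration; they are not the utility model or the semi-Markov formulation used in the paper.

```python
import random

# Toy offloading environment (hypothetical, not the paper's model):
# state = number of nearby vehicle edge servers (0..2)
# action = 0 (compute locally) or 1 (offload to a vehicle server)
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # learning rate, discount, exploration
N_STATES, N_ACTIONS = 3, 2

def reward(state, action):
    """Hypothetical utility: offloading pays off only when a server is nearby."""
    if action == 1:
        return 1.0 if state > 0 else -1.0
    return 0.2  # small but reliable utility for local execution

def step(state, action):
    """Stochastic vehicle traffic: server availability changes randomly."""
    return random.randint(0, N_STATES - 1)

def train(episodes=5000, seed=0):
    random.seed(seed)
    q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
    state = 0
    for _ in range(episodes):
        # epsilon-greedy action selection
        if random.random() < EPSILON:
            action = random.randrange(N_ACTIONS)
        else:
            action = max(range(N_ACTIONS), key=lambda a: q[state][a])
        r = reward(state, action)
        nxt = step(state, action)
        # Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        q[state][action] += ALPHA * (r + GAMMA * max(q[nxt]) - q[state][action])
        state = nxt
    return q
```

Under these toy dynamics, the learned table should prefer offloading when a vehicle server is nearby (`q[2][1] > q[2][0]`) and local execution otherwise; the paper's DRL method replaces the table with a neural network to cope with the much larger real state space.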
Funding Information
- National Natural Science Foundation of China (61773126, 61727810, 61701125, 61603099, 61973087)
- Pearl River S and T Nova Program of Guangzhou (201806010176)
- European Union's Horizon 2020 research and innovation programme (824019)