Reinforcement learning for resource provisioning in the vehicular cloud

Abstract
This article presents a concise view of vehicular clouds that encompasses the various vehicular cloud models proposed to date. Essentially, all of these models extend the traditional cloud and its utility computing functionalities across the entities of the vehicular ad hoc network. These entities include fixed roadside units, onboard units embedded in vehicles, and the personal smart devices of drivers and passengers. Together, these entities offer abundant processing, storage, sensing, and communication resources. However, vehicular clouds require novel resource provisioning techniques that can address their intrinsic challenges: dynamic resource demands and stringent QoS requirements. In this article, we show the benefits of reinforcement-learning-based techniques for resource provisioning in the vehicular cloud. Because these learning techniques can account for long-term benefits, they are well suited to minimizing the overhead of resource provisioning in vehicular clouds.
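The abstract does not specify the authors' exact learning formulation, so the following is only a minimal sketch of how a reinforcement-learning agent could make provisioning decisions in this setting. It uses tabular Q-learning with a hypothetical discretized state (cloud load level, request urgency), three illustrative actions (serve a request from a roadside unit, an onboard unit, or defer it), and an invented toy reward; the state space, actions, reward shaping, and hyperparameters are all assumptions for illustration, not details from the article.

```python
import random
from collections import defaultdict

# Hypothetical state space: (cloud load level, request urgency).
STATES = [(load, urgency) for load in range(3) for urgency in range(2)]
# Hypothetical provisioning actions: which resource pool serves the request.
ACTIONS = ["roadside_unit", "onboard_unit", "defer"]

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount, exploration

Q = defaultdict(float)  # Q[(state, action)] -> estimated long-term value

def choose_action(state):
    """Epsilon-greedy: mostly exploit the best known provisioning choice."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    """One-step Q-learning update toward the discounted long-term return."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

def step(state, action):
    """Toy environment: reward choices that avoid overload (purely illustrative)."""
    load, urgency = state
    if action == "defer":
        reward = -1.0 * (urgency + 1)      # deferral hurts urgent requests most
    elif action == "roadside_unit":
        reward = 1.0 - 0.5 * load          # RSUs degrade under high load
    else:  # onboard_unit
        reward = 0.5                       # modest but steady vehicular capacity
    return reward, random.choice(STATES)   # next request arrives at random

state = random.choice(STATES)
for _ in range(10_000):
    action = choose_action(state)
    reward, next_state = step(state, action)
    update(state, action, reward, next_state)
    state = next_state
```

The discount factor GAMMA is what lets the agent weigh long-term benefits rather than just the immediate cost of each provisioning decision, which is the property the abstract highlights.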
