Pre-training with asynchronous supervised learning for reinforcement learning based autonomous driving
- 28 May 2021
- research article
- Published by Zhejiang University Press in Frontiers of Information Technology & Electronic Engineering
- Vol. 22 (5), 673-686
- https://doi.org/10.1631/fitee.1900637
Abstract
Rule-based autonomous driving systems may suffer from increasing complexity as large numbers of intercoupled rules accumulate, so many researchers are exploring learning-based approaches. Reinforcement learning (RL) has been applied to the design of autonomous driving systems because of its strong performance on a wide variety of sequential control problems. However, poor initial performance is a major obstacle to the practical deployment of an RL-based autonomous driving system: RL training requires extensive data before the model reaches reasonable performance, making an RL-based model inapplicable in real-world settings, particularly when data are expensive. We propose an asynchronous supervised learning (ASL) method that pre-trains an RL-based end-to-end autonomous driving model, addressing its poor initial performance before the model is trained by RL in real-world settings. Specifically, prior knowledge is introduced in the ASL pre-training stage by asynchronously executing multiple supervised learning processes in parallel on multiple driving demonstration data sets. After pre-training, the model is deployed on a real vehicle and further trained by RL to adapt to the real environment and continually push beyond its pre-trained performance. The presented pre-training method is evaluated on the race car simulator TORCS (The Open Racing Car Simulator) to verify that it reliably improves the initial performance and convergence speed of an end-to-end autonomous driving model in the RL training stage. In addition, a real-vehicle verification system is built to verify the feasibility of the proposed pre-training method in a real-vehicle deployment. Simulation results show that using demonstrations during a supervised pre-training stage allows significant improvements in initial performance and convergence speed in the RL training stage.
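The core idea described above, several supervised learning workers running asynchronously in parallel, each on its own demonstration data set, while updating one shared model, can be sketched as follows. This is a minimal illustrative sketch only: the linear model, synthetic demonstration data, learning rate, and worker count are all assumptions, not the paper's actual network or data.

```python
import threading
import numpy as np

# Hypothetical sketch of asynchronous supervised pre-training (ASL):
# multiple workers, each holding its own demonstration data set, apply
# lock-free supervised SGD updates to one shared parameter vector.
# The paper's real model is a deep end-to-end driving network; a linear
# regressor stands in for it here purely for illustration.

true_w = np.array([1.5, -2.0])  # ground truth used to synthesize demos

def make_demo_dataset(seed, n=200):
    """Synthetic (state, action) demonstration pairs for one worker."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=(n, 2))
    y = x @ true_w + 0.01 * rng.normal(size=n)
    return x, y

shared_w = np.zeros(2)  # shared model parameters, updated by all workers

def worker(w, dataset, seed, steps=500, lr=0.05):
    """One asynchronous supervised learning process."""
    x, y = dataset
    rng = np.random.default_rng(seed)
    for _ in range(steps):
        i = rng.integers(len(y))
        grad = (x[i] @ w - y[i]) * x[i]  # squared-error gradient
        w -= lr * grad                   # in-place update of shared params

# Launch several supervised pre-training workers in parallel.
threads = [
    threading.Thread(target=worker, args=(shared_w, make_demo_dataset(s), s))
    for s in range(3)
]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(shared_w)  # should approach true_w after asynchronous pre-training
```

After this pre-training phase, the shared parameters would serve as the initialization for the RL stage, which is what gives the RL agent a reasonable starting policy instead of a random one.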