Baidu driving dataset and end-to-end reactive control model

Abstract
End-to-end autonomous driving systems have made great progress recently. In this paper, we introduce our open-source dataset, the Baidu Driving Dataset (BDD), and our end-to-end reactive control model trained on it. The BDD comes from the Baidu street-view project, which generates millions of kilometers of driving data every year. From this corpus, we publish 10,000 kilometers of driving data for end-to-end autonomous driving research. The BDD consists of two parts: forward-facing images and vehicle motion attitude. The vehicle motion attitude is derived from real-time kinematic (RTK) GPS location data with a standard deviation of 3 centimeters. Our reactive control model consists of lateral control and longitudinal control. We employ curvature instead of steering angle for lateral control, and acceleration, rather than throttle or brake, for longitudinal control. A CNN is employed for the lateral control model, mapping a single image from the forward camera directly to the corresponding curvature. For longitudinal control, a stacked convolutional LSTM extracts spatial and temporal features from a sequence of frames and maps them to longitudinal control commands. The demo and data are available at http://roadhackers.baidu.com. To the best of our knowledge, this is the first time that both lateral and longitudinal control have been implemented in an end-to-end style.
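The abstract does not specify how the curvature and acceleration targets are derived from the RTK-GPS motion data. One plausible sketch, with hypothetical helper names `menger_curvature` and `acceleration` (not from the paper), is to compute trajectory curvature from three consecutive GPS positions via the circumcircle (Menger curvature) and longitudinal acceleration as a finite difference of speed:

```python
import math

def menger_curvature(p1, p2, p3):
    """Curvature of the circle through three 2-D trajectory points.

    kappa = 4 * area(p1, p2, p3) / (|p1p2| * |p2p3| * |p3p1|),
    where area is the triangle area spanned by the points.
    """
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Twice the signed triangle area, via the 2-D cross product.
    area2 = (x2 - x1) * (y3 - y1) - (y2 - y1) * (x3 - x1)
    d12 = math.hypot(x2 - x1, y2 - y1)
    d23 = math.hypot(x3 - x2, y3 - y2)
    d31 = math.hypot(x1 - x3, y1 - y3)
    if d12 * d23 * d31 == 0.0:
        return 0.0  # degenerate (coincident points)
    # 4 * area / product of side lengths; sign encodes turn direction.
    return 2.0 * area2 / (d12 * d23 * d31)

def acceleration(v_prev, v_next, dt):
    """Longitudinal acceleration label from two consecutive speeds (m/s)
    separated by dt seconds."""
    return (v_next - v_prev) / dt
```

A curvature target defined this way is vehicle-independent, which is consistent with the paper's choice of curvature over steering angle: steering angle depends on the wheelbase and steering ratio of a particular car, whereas path curvature does not.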
