Generation and validation of virtual point cloud data for automated driving systems

Abstract
The performance of an automated driving system is crucially affected by its environmental perception. The vehicle's perception of its environment provides the foundation for the automated responses computed by the system's logic algorithms. As perception relies on the vehicle's sensors, simulating sensor behavior in a virtual world constitutes virtual environmental perception; this is the task performed by sensor models. In this work, we introduce a real-time capable model of the measurement process of an automotive lidar sensor that employs a ray tracing approach. The model outputs point cloud data based on the geometry and material properties of the virtual scene. From this low-level sensor data, a vehicle-internal representation of the environment is constructed by means of an occupancy grid mapping algorithm. By using a virtual environment constructed from high-fidelity measurements of a real-world scenario, we establish a direct link between real- and virtual-world sensor data. Directly comparing the resulting sensor output and environment representations from both cases allows us to quantitatively assess the validity and fidelity of the proposed sensor measurement model.
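
The pipeline described in the abstract, ray-traced range measurements turned into a point cloud and then fused into an occupancy grid via log-odds updates, can be illustrated with a minimal 2D sketch. The code below is not the authors' implementation: it ignores material properties and sensor noise, restricts the scene geometry to line segments, and all names and parameters (cast_rays, OccupancyGrid, the log-odds constants l_occ and l_free) are illustrative assumptions.

import numpy as np

def _ray_segment(o, d, a, b):
    # Return ray parameter t of the intersection of ray o + t*d with
    # segment a-b, or None if they do not intersect.
    v = b - a
    denom = d[0] * v[1] - d[1] * v[0]
    if abs(denom) < 1e-12:
        return None
    w = a - o
    t = (w[0] * v[1] - w[1] * v[0]) / denom  # distance along the ray
    s = (w[0] * d[1] - w[1] * d[0]) / denom  # position along the segment
    return t if t >= 0.0 and 0.0 <= s <= 1.0 else None

def cast_rays(pose, angles, segments, max_range=50.0):
    # Trace one beam per scan angle against the scene geometry and keep
    # the closest hit; beams without a hit report max_range.
    px, py, heading = pose
    origin = np.array([px, py])
    ranges = np.full(len(angles), max_range)
    for i, a in enumerate(angles):
        d = np.array([np.cos(heading + a), np.sin(heading + a)])
        for p1, p2 in segments:
            t = _ray_segment(origin, d, np.asarray(p1), np.asarray(p2))
            if t is not None and t < ranges[i]:
                ranges[i] = t
    return ranges

class OccupancyGrid:
    # Log-odds occupancy grid: cells traversed by a beam are updated as
    # free, the cell containing the return as occupied.
    def __init__(self, size=100, res=0.5, l_occ=0.85, l_free=-0.4):
        self.res, self.l_occ, self.l_free = res, l_occ, l_free
        self.logodds = np.zeros((size, size))

    def _add(self, x, y, value):
        i = int(x / self.res + self.logodds.shape[0] / 2)
        j = int(y / self.res + self.logodds.shape[1] / 2)
        if 0 <= i < self.logodds.shape[0] and 0 <= j < self.logodds.shape[1]:
            self.logodds[i, j] += value

    def update(self, origin, points, max_range=50.0):
        ox, oy = origin
        for px, py in points:
            rng = np.hypot(px - ox, py - oy)
            n = max(int(rng / (0.5 * self.res)), 1)
            for k in range(n):  # sample cells along the beam as free space
                self._add(ox + (px - ox) * k / n, oy + (py - oy) * k / n,
                          self.l_free)
            if rng < max_range:  # a genuine return marks its cell occupied
                self._add(px, py, self.l_occ)

    def probabilities(self):
        # Convert accumulated log odds back to occupancy probabilities.
        return 1.0 / (1.0 + np.exp(-self.logodds))

if __name__ == "__main__":
    wall = [((10.0, -5.0), (10.0, 5.0))]             # single wall in the scene
    angles = np.linspace(-np.pi / 4, np.pi / 4, 91)  # 90-degree lidar fan
    ranges = cast_rays((0.0, 0.0, 0.0), angles, wall)
    cloud = [(r * np.cos(a), r * np.sin(a)) for r, a in zip(ranges, angles)]
    grid = OccupancyGrid()
    grid.update((0.0, 0.0), cloud)

A production model such as the one the paper describes would trace beams against a full 3D scene, attenuate returns based on surface material, and run the ray casting on dedicated hardware to remain real-time capable; the structure of the point-cloud-to-grid fusion step, however, follows the same log-odds scheme sketched here.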