Transforming a 3-D LiDAR Point Cloud Into a 2-D Dense Depth Map Through a Parameter Self-Adaptive Framework

Abstract
The 3-D LiDAR scanner and the 2-D charge-coupled device (CCD) camera are two typical sensors for perceiving the surrounding environment in robotics and autonomous driving. They are commonly used together to improve perception accuracy, simultaneously recording the distances of surrounding objects as well as their color and shape. In this paper, we use the correspondence between a 3-D LiDAR scanner and a CCD camera to rearrange the captured LiDAR point cloud into a dense depth map in which each 3-D point corresponds to a pixel at the same location in the RGB image. We assume that the LiDAR scanner and the CCD camera are accurately calibrated and synchronized beforehand, so that each 3-D LiDAR point cloud is aligned with its corresponding RGB image. Each frame of the LiDAR point cloud is first projected onto the RGB image plane to form a sparse depth map. A self-adaptive method is then proposed to upsample the sparse depth map into a dense one, in which the RGB image and the anisotropic diffusion tensor are exploited to guide the upsampling by reinforcing RGB-depth compactness. Finally, convex optimization is applied to the dense depth map for global enhancement. Experiments on the KITTI and Middlebury data sets demonstrate that the proposed method outperforms several relevant state-of-the-art methods in terms of both visual comparison and root-mean-square error.
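To make the projection step concrete, the following is a minimal sketch (not the authors' code) of how a calibrated LiDAR frame can be projected onto the RGB image plane to form a sparse depth map. It assumes a hypothetical 4x4 LiDAR-to-camera extrinsic matrix `T_cam_lidar` and a 3x3 intrinsic matrix `K` obtained from the prior calibration mentioned in the abstract; all function and variable names are illustrative.

```python
import numpy as np

def lidar_to_sparse_depth(points_xyz, T_cam_lidar, K, image_shape):
    """Project 3-D LiDAR points into the camera image plane.

    points_xyz  : (N, 3) array of LiDAR points in the LiDAR frame.
    T_cam_lidar : (4, 4) rigid transform from LiDAR frame to camera frame
                  (from the assumed prior calibration).
    K           : (3, 3) camera intrinsic matrix.
    image_shape : (height, width) of the corresponding RGB image.
    Returns a (height, width) depth map; pixels with no LiDAR return stay 0.
    """
    h, w = image_shape

    # Homogeneous coordinates, then transform into the camera frame.
    pts_h = np.hstack([points_xyz, np.ones((points_xyz.shape[0], 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]

    # Keep only points in front of the camera.
    pts_cam = pts_cam[pts_cam[:, 2] > 0]

    # Perspective projection with the intrinsics.
    uv = (K @ pts_cam.T).T
    u = np.round(uv[:, 0] / uv[:, 2]).astype(int)
    v = np.round(uv[:, 1] / uv[:, 2]).astype(int)
    z = pts_cam[:, 2]

    # Discard projections that fall outside the image.
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    u, v, z = u[valid], v[valid], z[valid]

    # Keep the nearest return when several points land on the same pixel.
    depth = np.zeros((h, w), dtype=np.float32)
    order = np.argsort(-z)  # far to near, so nearer points overwrite farther ones
    depth[v[order], u[order]] = z[order]
    return depth
```

The resulting sparse depth map is the input that the paper's self-adaptive, RGB-guided upsampling and subsequent convex optimization would then densify and refine; those stages are specific to the proposed method and are not reproduced here.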
Funding Information
  • National Natural Science Foundation of China (41401525, 61301277, 41371431)
  • Guangdong Provincial Natural Science Foundation (2014A030313209)
  • CRSRI Open Research Program (CKWV2014226/KY)
