Transforming a 3-D LiDAR Point Cloud Into a 2-D Dense Depth Map Through a Parameter Self-Adaptive Framework
- 1 June 2016
- journal article
- Published by Institute of Electrical and Electronics Engineers (IEEE) in IEEE Transactions on Intelligent Transportation Systems
- Vol. 18 (1), 165-176
- https://doi.org/10.1109/tits.2016.2564640
Abstract
The 3-D LiDAR scanner and the 2-D charge-coupled device (CCD) camera are two typical sensors for perceiving the surrounding environment in robotics and autonomous driving. They are commonly used together to improve perception accuracy by simultaneously recording the distances of surrounding objects as well as their color and shape. In this paper, we use the correspondence between a 3-D LiDAR scanner and a CCD camera to rearrange the captured LiDAR point cloud into a dense depth map in which each 3-D point corresponds to a pixel at the same location in the RGB image. We assume that the LiDAR scanner and the CCD camera are accurately calibrated and synchronized beforehand, so that each 3-D LiDAR point cloud is aligned with its corresponding RGB image. Each frame of the LiDAR point cloud is first projected onto the RGB image plane to form a sparse depth map. A self-adaptive method is then proposed to upsample the sparse depth map into a dense one, exploiting the RGB image and the anisotropic diffusion tensor to guide the upsampling by reinforcing RGB-depth compactness. Finally, convex optimization is applied to the dense depth map for global enhancement. Experiments on the KITTI and Middlebury data sets demonstrate that the proposed method outperforms several relevant state-of-the-art methods in both visual comparison and root-mean-square error.
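The first stage of the pipeline described above, projecting each LiDAR frame onto the RGB image plane to obtain a sparse depth map, can be sketched as follows. This is a minimal illustration assuming a standard pinhole model with an intrinsic matrix `K` and a rigid LiDAR-to-camera transform `T_cam_lidar`; these names and the nearest-point tie-breaking rule are assumptions for illustration, not the paper's exact formulation.

```python
# Sketch: project 3-D LiDAR points onto the image plane to form a sparse
# depth map. K and T_cam_lidar are placeholder calibration inputs; in
# practice they come from an offline LiDAR-camera calibration.
import numpy as np

def project_to_sparse_depth(points_lidar, K, T_cam_lidar, h, w):
    """points_lidar: (N, 3) array of 3-D points in the LiDAR frame.
    K: (3, 3) camera intrinsic matrix.
    T_cam_lidar: (4, 4) rigid transform from LiDAR to camera frame.
    Returns an (h, w) depth map; pixels with no LiDAR return stay 0."""
    n = points_lidar.shape[0]
    pts_h = np.hstack([points_lidar, np.ones((n, 1))])   # homogeneous coords
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]           # into camera frame
    pts_cam = pts_cam[pts_cam[:, 2] > 0]                 # keep points in front
    uvw = (K @ pts_cam.T).T                              # perspective projection
    u = np.round(uvw[:, 0] / uvw[:, 2]).astype(int)
    v = np.round(uvw[:, 1] / uvw[:, 2]).astype(int)
    z = pts_cam[:, 2]
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    u, v, z = u[inside], v[inside], z[inside]
    depth = np.zeros((h, w))
    # If several points land on one pixel, keep the nearest one: write
    # depths from far to near so the smallest z is written last.
    order = np.argsort(-z)
    depth[v[order], u[order]] = z[order]
    return depth
```

The resulting map is sparse because a LiDAR sweep covers only a small fraction of the image pixels; the paper's upsampling and convex-optimization stages then densify and refine it.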
Funding Information
- National Natural Science Foundation of China (41401525, 61301277, 41371431)
- Guangdong Provincial Natural Science Foundation (2014A030313209)
- CRSRI Open Research Program (CKWV2014226/KY)
This publication has 27 references indexed in Scilit:
- A novel way to organize 3D LiDAR point cloud as 2D depth map, height map and surface normal map. Published by Institute of Electrical and Electronics Engineers (IEEE), 2015
- Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification. Published by Institute of Electrical and Electronics Engineers (IEEE), 2015
- Sparse depth map upsampling with RGB image and anisotropic diffusion tensor. Published by Institute of Electrical and Electronics Engineers (IEEE), 2015
- Color-Guided Depth Recovery From RGB-D Data Using an Adaptive Autoregressive Model. IEEE Transactions on Image Processing, 2014
- Depth Map Upsampling via Compressive Sensing. Published by Institute of Electrical and Electronics Engineers (IEEE), 2013
- Perceptual Organization and Recognition of Indoor Scenes from RGB-D Images. Published by Institute of Electrical and Electronics Engineers (IEEE), 2013
- Are we ready for autonomous driving? The KITTI vision benchmark suite. Published by Institute of Electrical and Electronics Engineers (IEEE), 2012
- 3D Convolutional Neural Networks for Human Action Recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2012
- A Comparative Study of Energy Minimization Methods for Markov Random Fields with Smoothness-Based Priors. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2008
- Evaluation of Cost Functions for Stereo Matching. Published by Institute of Electrical and Electronics Engineers (IEEE), 2007