Methods to Detect Road Features for Video-Based In-Vehicle Navigation Systems

Abstract
Understanding road features, such as the position and color of lane markings, in live video captured from a moving vehicle is essential for building video-based car navigation systems. In this article, the authors present a framework to detect road features in two difficult situations: (a) ambiguous road surface conditions (e.g., damaged roads and lane markings occluded by other vehicles on the road) and (b) poor illumination conditions (e.g., backlight, during sunset). Furthermore, to determine which lane the vehicle is driving in, the authors present a Bayesian network (BN) model, which is necessary to support more sophisticated navigation services, such as recommending a lane change at an appropriate time before turning left or right at the next intersection. In the proposed BN approach, evidence from (1) a computer vision engine (e.g., lane-color detection) and (2) a navigation database (e.g., the total number of lanes) is fused to decide the lane number more accurately. Extensive simulation results indicated that the proposed methods are both robust and effective in detecting road features for a video-based car navigation system.
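The evidence-fusion idea can be illustrated with a minimal sketch. The following is not the authors' implementation; it assumes a hypothetical setup in which the navigation database supplies the total number of lanes (used as a uniform prior over lane positions) and the vision engine supplies a likelihood of the observed marking colors for each candidate lane, which a simple Bayesian update then combines into a posterior over the lane number.

```python
def lane_posterior(num_lanes, color_likelihoods):
    """Posterior P(lane k | evidence) for lanes k = 1..num_lanes.

    num_lanes: total lane count from the navigation database;
        it defines a uniform prior over lane positions.
    color_likelihoods: P(observed marking colors | lane k) for each
        lane, as reported by the vision engine (hypothetical values).
    """
    prior = 1.0 / num_lanes  # uniform prior over lanes
    # Bayes' rule: posterior proportional to prior * likelihood
    unnorm = [prior * lk for lk in color_likelihoods]
    z = sum(unnorm)  # normalizing constant
    return [u / z for u in unnorm]

# Example: a 3-lane road where a yellow marking is detected to the
# left of the vehicle -- most likely when driving in the leftmost lane.
posterior = lane_posterior(3, [0.8, 0.15, 0.05])
```

With a uniform prior, the posterior simply tracks the normalized likelihoods; a non-uniform prior (e.g., from the previously estimated lane) would shift the result accordingly.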
