Probabilistic decision-making under uncertainty for autonomous driving using continuous POMDPs

Abstract
This paper presents a generic approach to tactical decision-making under uncertainty in the context of driving. The complexity of this task stems mainly from the fact that rational decision-making in this context must consider several sources of uncertainty: the temporal evolution of situations cannot be predicted without uncertainty, because other road users behave stochastically and their goals and plans cannot be measured. Even more importantly, road users can perceive only a tiny part of the current situation with their sensors, because measurements are noisy and most of the environment is occluded. To anticipate the consequences of decisions, a probabilistic approach that considers both forms of uncertainty is necessary. We address this by formulating the task of driving as a continuous Partially Observable Markov Decision Process (POMDP) that can be automatically optimized for different scenarios. Because driving is a continuous-space problem, the belief space is infinite-dimensional. We neither use a symbolic representation nor discretize the state space a priori, because no single representation of the state space is optimal for every situation. Instead, we employ a continuous POMDP solver that learns a good representation of the specific situation.
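To make the notion of a belief over a continuous state concrete, the following is a minimal sketch of a particle-based belief update, a standard way to approximate the infinite-dimensional belief with a finite sample set. The toy 1-D car-following models (`transition`, `obs_likelihood`), their noise parameters, and the `belief_update` routine are illustrative assumptions, not the solver or models used in the paper.

```python
import math
import random

# Hypothetical 1-D example: the hidden state is another vehicle's position,
# whose motion and our range measurement are both noisy. These models are
# assumptions for illustration only; they are not taken from the paper.

def transition(x, rng):
    # The other vehicle advances with stochastic speed (mean 1.0, std 0.3).
    return x + rng.gauss(1.0, 0.3)

def obs_likelihood(z, x, sigma=0.5):
    # Unnormalized Gaussian likelihood of measuring z given true position x.
    return math.exp(-0.5 * ((z - x) / sigma) ** 2)

def belief_update(particles, observation, rng):
    """One bootstrap-filter step: predict, weight by the observation, resample."""
    predicted = [transition(x, rng) for x in particles]
    weights = [obs_likelihood(observation, x) for x in predicted]
    if sum(weights) == 0.0:
        return predicted  # degenerate case: fall back to the prediction
    return rng.choices(predicted, weights=weights, k=len(particles))

rng = random.Random(0)
belief = [rng.gauss(0.0, 2.0) for _ in range(500)]  # particles sampled from the prior
for t in range(1, 6):
    z = float(t) + rng.gauss(0.0, 0.5)  # simulated noisy measurement
    belief = belief_update(belief, z, rng)

mean = sum(belief) / len(belief)
print(round(mean, 1))  # posterior mean estimate near the true position
```

The particle set plays the role of the situation-specific state representation: regions of the continuous state space that matter for the current scene receive more samples, without any a-priori discretization.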
