Controllability, observability and discrete-time Markovian jump linear quadratic control

Abstract
This paper is concerned with the controllability and observability of discrete-time linear systems whose parameters jump randomly according to finite-state Markov processes, and with the relationship between these properties and the solution of the infinite-time jump linear quadratic (JLQ) optimal control problem. The solution of the Markovian JLQ problem over finite or infinite time horizons is known. Necessary and sufficient conditions are also known for the existence of optimal constant control laws that yield finite optimal expected costs as the time horizon becomes infinite, and sufficient conditions are available for these steady-state control laws to stabilize the controlled system (Chizeck et al. 1986). These conditions are not easy to test, however. Various definitions of controllability and observability for stochastic systems exist in the literature, but they are not related to the steady-state JLQ control problem in a manner analogous to the role of deterministic controllability and observability in the linear quadratic optimal control problem. In this paper, new and refined definitions of controllability and observability for jump linear systems are developed. These properties admit relatively simple algebraic tests and, more importantly, can be used to determine the existence of finite steady-state JLQ solutions.
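The steady-state JLQ solution discussed above is governed by a set of coupled Riccati equations, one per Markov mode. As a minimal numerical sketch of how the existence of a finite steady-state cost can be probed, the following Python code iterates the standard coupled JLQ Riccati recursion backwards on a hypothetical two-mode system; the matrices `A`, `B`, `Q`, `R` and the transition matrix `P` are invented for illustration and are not taken from the paper, and convergence of this iteration is not a substitute for the paper's algebraic controllability/observability tests.

```python
# A minimal sketch (not the paper's algorithm) of the coupled Riccati
# recursion underlying the discrete-time Markovian JLQ problem, iterated
# backwards to probe for a steady-state solution. All system data below
# are hypothetical two-mode examples chosen for illustration only.
import numpy as np

# Jump linear system: x(t+1) = A[r] x(t) + B[r] u(t), with mode r(t) a
# finite-state Markov chain with transition probabilities P[i, j].
A = [np.array([[1.2, 0.5], [0.0, 0.8]]),   # mode 0 dynamics (unstable)
     np.array([[0.7, 0.1], [0.2, 0.9]])]   # mode 1 dynamics
B = [np.array([[1.0], [0.0]]),
     np.array([[0.0], [1.0]])]
Q = [np.eye(2), np.eye(2)]                  # state cost weights
R = [np.array([[1.0]]), np.array([[1.0]])]  # control cost weights
P = np.array([[0.9, 0.1],                   # mode transition matrix
              [0.3, 0.7]])
n_modes = len(A)

def jlq_iterate(K):
    """One backward step of the coupled JLQ Riccati recursion."""
    K_new = []
    for i in range(n_modes):
        # Expected cost-to-go matrix over the next mode, given mode i.
        E = sum(P[i, j] * K[j] for j in range(n_modes))
        gain_term = B[i].T @ E @ A[i]
        K_i = (Q[i] + A[i].T @ E @ A[i]
               - gain_term.T
               @ np.linalg.solve(R[i] + B[i].T @ E @ B[i], gain_term))
        K_new.append(K_i)
    return K_new

# Iterate from K = Q; convergence of the K_i suggests a finite
# steady-state JLQ cost for this (hypothetical) example.
K = [Qi.copy() for Qi in Q]
for t in range(500):
    K_next = jlq_iterate(K)
    delta = max(np.max(np.abs(Kn - Ko)) for Kn, Ko in zip(K_next, K))
    K = K_next
    if delta < 1e-10:
        print(f"converged after {t + 1} iterations")
        break

for i, Ki in enumerate(K):
    print(f"steady-state K[{i}] =\n{np.round(Ki, 4)}")
    # Corresponding constant feedback law u = -L[i] x in mode i.
    E = sum(P[i, j] * K[j] for j in range(n_modes))
    L = np.linalg.solve(R[i] + B[i].T @ E @ B[i], B[i].T @ E @ A[i])
    print(f"gain L[{i}] = {np.round(L, 4)}")
```

Note the coupling: each mode's cost-to-go matrix `K[i]` depends on the transition-probability-weighted average of all modes' matrices, which is precisely why the single-mode (deterministic) controllability and observability conditions do not carry over directly.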
