Abstract
We consider a class of restless multi-armed bandit problems that arises in multi-channel opportunistic communications, where channels are modeled as independent and stochastically identical Gilbert-Elliott channels and channel state observations are subject to errors. We show that the myopic channel selection policy has a semi-universal structure that obviates the need to know the Markovian transition probabilities of the channel states. Based on this structure, we establish closed-form lower and upper bounds on the steady-state throughput achieved by the myopic policy. Furthermore, we characterize the approximation factor of the myopic policy to bound its worst-case performance loss with respect to the optimal performance.
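The semi-universal structure described above can be illustrated with a minimal simulation sketch. It assumes the positively correlated case (the good-to-good transition probability is at least the bad-to-good probability), under which the policy stays on a channel observed as good and otherwise rotates to the least recently visited channel; all function and parameter names below are illustrative, not taken from the paper.

```python
from collections import deque
import random

def simulate_myopic(num_channels, p01, p11, p_err, slots, seed=0):
    """Sketch of the myopic policy's semi-universal structure for
    positively correlated channels (p11 >= p01).

    p01: P(bad -> good), p11: P(good -> good), p_err: probability
    that a channel-state observation is flipped (detection errors
    lumped into one illustrative parameter).
    Returns the empirical per-slot throughput.
    """
    rng = random.Random(seed)
    states = [rng.random() < 0.5 for _ in range(num_channels)]  # True = good
    order = deque(range(num_channels))  # head = channel sensed this slot
    successes = 0
    for _ in range(slots):
        ch = order[0]
        # Imperfect observation: the true state is flipped w.p. p_err.
        observed_good = states[ch] ^ (rng.random() < p_err)
        if states[ch]:
            successes += 1  # transmission on a good channel succeeds
        # Semi-universal rule: stay on a channel observed good; on a bad
        # observation, move it to the back and sense the least recently
        # visited channel. Only the sign of p11 - p01 is needed, not
        # the transition probabilities themselves.
        if not observed_good:
            order.rotate(-1)
        # Independent Gilbert-Elliott evolution of every channel.
        states = [(rng.random() < p11) if s else (rng.random() < p01)
                  for s in states]
    return successes / slots
```

Note that the channel-ordering rule never consults `p01` or `p11` directly (they appear only in the channel-evolution step of the simulation), which is precisely why the policy needs no knowledge of the transition probabilities.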
