Abstract
We consider a class of stochastic processing networks and assume that the networks satisfy a complete resource pooling condition. We prove that each maximum pressure policy asymptotically minimizes the workload process of a stochastic processing network in heavy traffic. We also show that, under each quadratic holding cost structure, there is a maximum pressure policy that asymptotically minimizes the holding cost. A key step in the optimality proofs is to establish a state space collapse result and a heavy traffic limit theorem for the network processes under a maximum pressure policy. We extend a framework of Bramson (Queueing Systems Theory Appl. 30 (1998) 89-148) and Williams (Queueing Systems Theory Appl. 30 (1998b) 5-25) from the multiclass queueing network setting to the stochastic processing network setting to prove the state space collapse result and the heavy traffic limit theorem. The extension can be adapted to other studies of stochastic processing networks.

1. Introduction. This paper is a continuation of Dai and Lin (2005), in which maximum pressure policies are shown to be throughput optimal for a class of stochastic processing networks. Throughput optimality is an important, first-order objective for many networks, but it ignores key secondary performance measures such as the queueing delays experienced by jobs in these networks. In this paper we show that maximum pressure policies enjoy additional optimality properties: they are asymptotically optimal in minimizing a certain workload or holding cost of a stochastic processing network.

Stochastic processing networks were introduced in a series of three papers by Harrison (2000, 2002, 2003). In Dai and Lin (2005) and this paper we consider a special class of Harrison's model. This class of stochastic processing networks is much more general than the multiclass queueing networks that have been a subject of intensive study over the last 20 years; see, for example, Harrison (1988), Williams