Abstract
We treat an interesting class of "distributed" recursive stochastic algorithms (of the stochastic approximation type) that arises when parallel processing methods are used for the Monte Carlo optimization of systems, as well as in applications such as the decentralized and asynchronous on-line optimization of flows in communication networks. Such algorithms arise generally when different (noisy) processors control different components of the system state variable and the processors compute and communicate asynchronously. Since the computation and communication times are random (data and noise dependent) and asynchronous, there is no "iterate number" that serves as a common index for all the processors. This causes much of the analytical difficulty, and one must use elapsed processing time (the natural alternative) rather than iterate number as the process parameter. The asymptotic properties (as the system "gain" goes to zero) are analyzed under conditions of both exogenous and state-dependent noise and computation times. Weak convergence methods provide the main analytical tools. Suitably normalized sequences of iterates are shown to converge to the solution of either an ordinary or a stochastic differential equation, and the asymptotic properties (as t → ∞ and gain → 0) are obtained. A numerical comparison is made between the asymptotic normalized errors for a classical stochastic approximation (normalized in terms of elapsed processing time) and those for the decentralized cases. This clearly illustrates the nature of the improvement due to parallel processing.
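The algorithm class described above can be illustrated with a minimal sketch. The example below is a hypothetical construction (not taken from the paper): two processors each own one component of the state, apply small-gain noisy-gradient corrections, and update at random, independent computation times, so iterates are indexed by elapsed processing time rather than by a common iterate counter.

```python
import random

def noisy_grad(x, i, rng):
    # Noisy partial derivative of the illustrative objective
    # f(x) = 0.5*((x0 - 1)^2 + (x1 + 2)^2); the target and noise level
    # are arbitrary choices for this sketch.
    target = [1.0, -2.0]
    return (x[i] - target[i]) + rng.gauss(0.0, 0.1)

def async_sa(gain=0.01, horizon=1000.0, seed=0):
    """Decentralized asynchronous stochastic approximation sketch.

    Processor i owns component x[i] and updates it whenever its own
    (random, exponentially distributed) computation time elapses.
    There is no shared iterate number; `t` is elapsed processing time.
    """
    rng = random.Random(seed)
    x = [0.0, 0.0]  # shared state; processor i controls x[i]
    # Next update epoch for each processor (random computation times).
    next_update = [rng.expovariate(1.0), rng.expovariate(1.0)]
    t = 0.0
    while t < horizon:
        # The processor whose computation finishes first updates next.
        i = 0 if next_update[0] <= next_update[1] else 1
        t = next_update[i]
        # Small-gain stochastic approximation step on component i.
        x[i] -= gain * noisy_grad(x, i, rng)
        # Schedule this processor's next (random) completion time.
        next_update[i] = t + rng.expovariate(1.0)
    return x
```

With a small gain and a long horizon, the state drifts toward the minimizer of the illustrative objective, consistent with the limiting ordinary-differential-equation behavior the abstract describes.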
