Distributed iterative regularization algorithms for monotone Nash games

Abstract
In this paper, we consider the development of single-timescale schemes for the distributed computation of Nash equilibria. In general, equilibria associated with convex Nash games over continuous strategy sets are wholly captured by the solution set of a variational inequality. Our focus is on Nash games whose equilibrium conditions are given by monotone variational inequalities, a class referred to as monotone Nash games. Unless suitably strong assumptions (such as strong monotonicity) are imposed on the mapping corresponding to the variational inequality, distributed schemes for computing equilibria often require the solution of a sequence of regularized problems, each of which has a unique solution. Such schemes operate on two timescales and are generally harder to implement in online settings. Motivated by this shortcoming, this work focuses on the development of three single-timescale iterative regularization schemes that require precisely one projection step at every iteration. The first is an iterative Tikhonov regularization scheme, while the second is an analogously constructed iterative proximal-point method. Both schemes are characterized by the property that the regularization/centering parameters are updated after every iteration, rather than only once an approximate solution to the regularized problem has been obtained. Finally, a modified form of the proximal-point scheme is also presented, in which the weight on the proximal term is updated as well.
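To illustrate the single-timescale idea described above, the sketch below shows one possible form of an iterative Tikhonov update for a monotone variational inequality: a single projection step per iteration applied to the regularized map, with the regularization weight reduced every iteration. This is a minimal illustration under assumed choices, not the paper's exact algorithm; the map F, the box constraint set, and the step-size and regularization sequences gamma_k and eps_k are hypothetical.

```python
import numpy as np

# Minimal sketch (not the authors' exact scheme) of a single-timescale
# iterative Tikhonov method for VI(X, F): find x in X such that
# F(x)^T (y - x) >= 0 for all y in X, with F monotone.

def F(x):
    # A monotone but not strongly monotone map: x -> A x with A skew-symmetric.
    A = np.array([[0.0, 1.0],
                  [-1.0, 0.0]])
    return A @ x

def project_box(x, lo=-1.0, hi=1.0):
    # Euclidean projection onto the box [lo, hi]^n; the only projection per iteration.
    return np.clip(x, lo, hi)

def iterative_tikhonov(x0, iters=5000):
    x = np.asarray(x0, dtype=float)
    for k in range(1, iters + 1):
        gamma_k = 1.0 / k**0.6   # assumed step-size sequence
        eps_k = 1.0 / k**0.3     # assumed regularization sequence, driven to zero
        # Single projected step on the regularized map F(x) + eps_k * x;
        # eps_k is updated at every iteration, not per regularized subproblem.
        x = project_box(x - gamma_k * (F(x) + eps_k * x))
    return x

print(iterative_tikhonov(np.array([0.9, -0.7])))
```

In sketches of this kind, the step-size and regularization sequences are typically coordinated so that gamma_k decays faster than eps_k while the products gamma_k * eps_k remain non-summable; the exact conditions used in the paper may differ.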
