Distributed Alternating Direction Method of Multipliers

Abstract
We consider a network of agents that are cooperatively solving a global unconstrained optimization problem, where the objective function is the sum of privately known local objective functions of the agents. Recent literature on distributed optimization methods for solving this problem has focused on subgradient-based methods, which typically converge at the rate O(1/√k), where k is the number of iterations. In this paper, we introduce a new distributed optimization algorithm based on the Alternating Direction Method of Multipliers (ADMM), a classical method for sequentially decomposing optimization problems with coupled constraints. We show that this algorithm converges at the rate O(1/k).
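For orientation, the following is a minimal sketch of the standard global-consensus formulation to which ADMM is commonly applied; the paper's distributed variant over a general agent network may use a different reformulation, and the penalty parameter ρ and multipliers λ_i here are generic notation rather than the paper's.

% Sketch (assumed standard consensus ADMM, not necessarily the paper's exact scheme).
% Each agent i holds a private objective f_i and a local copy x_i of the decision
% variable; the constraint x_i = z enforces agreement on a common variable z.
\begin{align*}
\min_{x_1,\dots,x_N,\,z} \quad & \sum_{i=1}^{N} f_i(x_i)
  \qquad \text{s.t.} \quad x_i = z,\ i = 1,\dots,N,\\
\intertext{with the ADMM iterations on the augmented Lagrangian (penalty $\rho>0$):}
x_i^{k+1} &= \arg\min_{x_i}\; f_i(x_i) + \langle \lambda_i^{k}, x_i - z^{k}\rangle
  + \tfrac{\rho}{2}\lVert x_i - z^{k}\rVert^2,\\
z^{k+1} &= \tfrac{1}{N}\sum_{i=1}^{N}\Bigl(x_i^{k+1} + \tfrac{1}{\rho}\lambda_i^{k}\Bigr),\\
\lambda_i^{k+1} &= \lambda_i^{k} + \rho\,\bigl(x_i^{k+1} - z^{k+1}\bigr).
\end{align*}

The x_i-updates can be carried out in parallel by the agents using only their private f_i, while the z- and λ_i-updates require only local averaging and dual ascent, which is what makes the method amenable to distributed implementation.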
