2016
DOI: 10.1002/cplx.21794

Distributed mirror descent method for saddle point problems over directed graphs

Abstract: In this article, we consider a min–max multi-agent optimization problem in which multiple agents cooperatively optimize a sum of local convex–concave functions, each of which is available only to one specific agent in a network. To solve the problem, we propose a distributed optimization method that extends classical mirror descent algorithms to the distributed setting. We establish convergence of the algorithm under the mild conditions that the agent communication follows a directed graph and the related weighted matri…
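For orientation, a minimal sketch of one iteration of a distributed mirror descent scheme for this saddle point setting, written in standard notation (the mixing weights w_{ij}, step size α_t, and Bregman divergence D_ψ are assumptions of this sketch; the paper's exact update may differ):

\begin{align*}
\hat{x}_i^{\,t} = \sum_{j=1}^{n} w_{ij}\, x_j^{\,t}, \qquad \hat{y}_i^{\,t} = \sum_{j=1}^{n} w_{ij}\, y_j^{\,t} &\quad \text{(consensus mixing over the directed graph)}\\
x_i^{\,t+1} = \operatorname*{arg\,min}_{x \in X} \Big\{ \big\langle \nabla_x f_i(\hat{x}_i^{\,t}, \hat{y}_i^{\,t}),\, x \big\rangle + \tfrac{1}{\alpha_t}\, D_\psi(x, \hat{x}_i^{\,t}) \Big\} &\quad \text{(mirror descent step in } x\text{)}\\
y_i^{\,t+1} = \operatorname*{arg\,max}_{y \in Y} \Big\{ \big\langle \nabla_y f_i(\hat{x}_i^{\,t}, \hat{y}_i^{\,t}),\, y \big\rangle - \tfrac{1}{\alpha_t}\, D_\psi(y, \hat{y}_i^{\,t}) \Big\} &\quad \text{(mirror ascent step in } y\text{)}
\end{align*}

Each agent first mixes its neighbors' iterates according to the communication graph, then takes a mirror descent step in x and a mirror ascent step in y against its local function f_i.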

Cited by 5 publications (3 citation statements)
References: 42 publications
“…Of particular relevance to this work is [28], where decentralized mirror descent is developed for the setting in which agents receive gradients with a delay. More recently, the application of mirror descent to saddle point problems is studied in [29]. Moreover, Rabbat in [30] proposes a decentralized mirror descent method for stochastic composite optimization problems and provides guarantees for strongly convex regularizers.…”
Section: A. Related Literature
confidence: 99%
“…In general, there are two ways to choose the weight matrix so that it satisfies the requirement. One way is to solve the unconstrained optimization problem defined in (12); the other is heuristic. To imitate a time-varying weight matrix, a pool of 50 weight matrices from connected random graphs is generated, in which each weight matrix satisfies Assumption 1.…”
Section: Numerical Simulations
confidence: 99%
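The citing paper's equation (12) and Assumption 1 are not reproduced in this excerpt. Assuming Assumption 1 asks for a doubly stochastic weight matrix with positive diagonal supported on a connected graph, the heuristic route might look like the following minimal Python sketch (Metropolis–Hastings weights on connected Erdős–Rényi graphs; all names here are illustrative, not the paper's code):

import numpy as np

def random_connected_graph(n, p, rng):
    """Sample an Erdos-Renyi adjacency matrix, retrying until connected."""
    while True:
        A = (rng.random((n, n)) < p).astype(float)
        A = np.triu(A, 1)
        A = A + A.T                      # undirected: symmetric, zero diagonal
        seen, stack = {0}, [0]           # BFS connectivity check
        while stack:
            v = stack.pop()
            for u in np.flatnonzero(A[v]):
                if u not in seen:
                    seen.add(u)
                    stack.append(u)
        if len(seen) == n:
            return A

def metropolis_weights(A):
    """Doubly stochastic Metropolis-Hastings weights for adjacency A."""
    n = A.shape[0]
    deg = A.sum(axis=1)
    W = np.zeros((n, n))
    for i in range(n):
        for j in np.flatnonzero(A[i]):
            W[i, j] = 1.0 / (1.0 + max(deg[i], deg[j]))
        W[i, i] = 1.0 - W[i].sum()       # positive diagonal entry
    return W

rng = np.random.default_rng(0)
pool = [metropolis_weights(random_connected_graph(10, 0.4, rng))
        for _ in range(50)]
# Each W in the pool is symmetric and doubly stochastic:
assert all(np.allclose(W.sum(axis=1), 1.0) for W in pool)

Metropolis weights are a common choice for this kind of heuristic because they are symmetric, doubly stochastic, and computable from local degree information alone.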
“…In [6], Nedić et al. generalized the distributed method in [5] to solve constrained convex optimization. Later, many researchers proposed various extensions based on primal (sub)gradient methods, for example in [7]–[12]. In [13], Duchi et al. extended the centralized dual averaging algorithm to the distributed setting, proposing a distributed dual averaging algorithm.…”
Section: Introduction
confidence: 99%
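For reference, the distributed dual averaging update attributed here to Duchi et al. [13] takes, in its standard presentation, roughly the following form (a sketch, with mixing weights p_{ij}, local subgradients g_i(t) ∈ ∂f_i(x_i(t)), a strongly convex prox-function ψ, and step sizes α(t)):

\[
z_i(t+1) = \sum_{j=1}^{n} p_{ij}\, z_j(t) + g_i(t), \qquad
x_i(t+1) = \operatorname*{arg\,min}_{x \in X} \Big\{ \langle z_i(t+1),\, x \rangle + \frac{1}{\alpha(t)}\, \psi(x) \Big\},
\]

i.e., each agent averages dual (gradient-sum) variables over its neighbors, adds its local subgradient, and maps back to the primal feasible set through the prox-function.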