2013
DOI: 10.1007/978-3-642-34097-0_7

Distributed Bregman-Distance Algorithms for Min-Max Optimization

Abstract: We consider a min-max optimization problem over a time-varying network of computational agents, where each agent in the network has a local convex cost function that is private knowledge of the agent. The agents want to jointly minimize the maximum cost incurred by any agent in the network, while maintaining the privacy of their objective functions. To solve the problem, we consider subgradient algorithms where each agent computes its own estimates of an optimal point based on its own cost function, and i…
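To make the template concrete, below is a minimal sketch (not the authors' exact algorithm) of the consensus-plus-subgradient pattern the abstract describes, using the squared Euclidean norm as the Bregman distance so that the mirror step reduces to a plain subgradient step. The weight matrix `W`, the function `local_subgrad`, and the step-size schedule are illustrative assumptions.

```python
import numpy as np

# Minimal sketch (not the paper's exact method): each agent mixes its
# neighbors' iterates through a stochastic weight matrix W, then takes a
# mirror-descent step on its own local cost. With the Bregman distance
# D(x, y) = 0.5 * ||x - y||^2, the mirror step reduces to an ordinary
# subgradient step. W, local_subgrad, and step are illustrative names.

def distributed_mirror_step(X, W, local_subgrad, step):
    """One round; X is an (m, n) array holding the m agents' iterates."""
    V = W @ X                                        # consensus mixing
    G = np.stack([local_subgrad(i, V[i]) for i in range(len(X))])
    return V - step * G                              # Euclidean mirror step

# Toy run: scalar iterates, local costs f_i(x) = |x - t_i|. This drives
# the agents toward consensus while descending their local costs; it
# illustrates the template only, not the paper's min-max guarantees.
targets = np.array([0.0, 1.0, 4.0])
m = len(targets)
W = np.full((m, m), 1.0 / m)                         # doubly stochastic
X = np.zeros((m, 1))
for k in range(1, 2001):
    X = distributed_mirror_step(
        X, W, lambda i, x: np.sign(x - targets[i]), step=1.0 / np.sqrt(k))
print(X.ravel())
```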

Cited by 24 publications (21 citation statements)
References 37 publications
“…The work in this article is closely related to the previous works. Our algorithm is based on the initial discovery in Ref.…”
Section: Introduction
Confidence: 96%
“…[ ], but we extend the mirror descent method to the distributed setting for solving multi-agent saddle point optimization. The article proposed distributed Bregman-distance algorithms for solving min-max problems; however, to guarantee convergence, their algorithm requires doubly stochastic weight matrices over an undirected network, while our algorithm only requires row-stochastic weight matrices over a directed network. In addition, our algorithm includes the algorithm DPDSG proposed in Ref.…”
Section: Introduction
Confidence: 99%
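The contrast drawn in this statement, doubly stochastic weight matrices on an undirected network versus merely row-stochastic ones on a directed network, can be checked mechanically. The sketch below is an illustrative aside with hypothetical helper names, not code from either paper.

```python
import numpy as np

# Illustrative helpers (hypothetical names): a row-stochastic matrix has
# nonnegative entries and rows summing to 1; a doubly stochastic matrix
# additionally has columns summing to 1.

def is_row_stochastic(W, tol=1e-9):
    return bool(np.all(W >= -tol)) and np.allclose(W.sum(axis=1), 1.0, atol=tol)

def is_doubly_stochastic(W, tol=1e-9):
    return is_row_stochastic(W, tol) and np.allclose(W.sum(axis=0), 1.0, atol=tol)

# On a directed graph each agent can normalize the weights on its own
# in-neighbors, which gives row stochasticity by construction, but the
# column sums (the out-weights) need not equal 1:
W_directed = np.array([[0.5, 0.0, 0.5],
                       [0.5, 0.5, 0.0],
                       [0.0, 0.7, 0.3]])
print(is_row_stochastic(W_directed))     # True
print(is_doubly_stochastic(W_directed))  # False: columns sum to 1.0, 1.2, 0.8
```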
“…Literature review. There are many approaches to decentralized convex resource optimization for multi-agent systems in the literature; for example, some are based on dual decomposition methods, e.g., Srivastava et al. for unconstrained or Xiao et al. and Borst and Saniee for constrained problems, or on the alternating direction method of multipliers, e.g., Magnússon et al. Other approaches are based on a combination of subgradients and consensus, a local version of the replicator equation, gossip algorithms, saddle-point methods, or Laplacian gradient dynamics. However, these approaches are not proven to be robust, since they assume no errors in communication or computations.…”
Section: Introduction
Confidence: 99%
“…To handle optimization problems with constraints, projected subgradient methods have been adapted to the distributed setting [2], [4], [5], where the constraints are assumed to be common among all agents. On the other hand, distributed primal-dual subgradient algorithms [3] and primal-dual perturbation methods [8] can solve problems with uncommon constraints.…”
Section: Introduction
Confidence: 99%
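As an illustrative aside (not taken from [2], [4], or [5]), the projected-subgradient pattern with a common constraint set can be sketched as follows. The Euclidean ball is a stand-in constraint set chosen because its projection has a closed form, and all names are hypothetical.

```python
import numpy as np

# Illustrative sketch of the projected-subgradient template with a
# constraint set common to all agents (here the Euclidean ball of
# radius r, whose projection has a closed form). Hypothetical names.

def project_ball(x, r=1.0):
    """Euclidean projection onto {x : ||x|| <= r}."""
    nrm = np.linalg.norm(x)
    return x if nrm <= r else (r / nrm) * x

def projected_subgradient_round(X, W, local_subgrad, step, r=1.0):
    """Consensus mix, local subgradient step, then project onto the set."""
    V = W @ X
    Y = [V[i] - step * local_subgrad(i, V[i]) for i in range(len(X))]
    return np.stack([project_ball(y, r) for y in Y])
```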