2017
DOI: 10.1109/tac.2017.2662019

Asynchronous Multiagent Primal-Dual Optimization

Abstract: We present a framework for asynchronously solving convex optimization problems over networks of agents which are augmented by the presence of a centralized cloud computer. This framework uses a Tikhonov-regularized primal-dual approach in which the agents update the system's primal variables and the cloud updates its dual variables. To minimize coordination requirements placed upon the system, the times of communications and computations among the agents are allowed to be arbitrary, provided they satisfy mild …
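The regularized primal-dual scheme described in the abstract can be sketched numerically. The following is a minimal synchronous illustration on a toy one-dimensional problem, not the paper's asynchronous cloud architecture; the problem data (f(x) = x², constraint x ≥ 1), regularization weights, and step size are all assumptions for illustration:

```python
# Tikhonov-regularized primal-dual gradient sketch (toy, synchronous).
# Assumed problem: minimize f(x) = x^2 subject to g(x) = 1 - x <= 0.
# Regularized Lagrangian:
#   L(x, mu) = f(x) + mu * g(x) + (alpha/2) * x^2 - (delta/2) * mu^2

alpha, delta = 0.01, 0.01   # Tikhonov regularization weights (assumed)
gamma = 0.1                 # step size (assumed)

x, mu = 0.0, 0.0
for _ in range(5000):
    grad_x = 2.0 * x - mu + alpha * x    # dL/dx (note dg/dx = -1)
    grad_mu = (1.0 - x) - delta * mu     # dL/dmu
    x = x - gamma * grad_x               # primal descent (the agents' role)
    mu = max(0.0, mu + gamma * grad_mu)  # projected dual ascent (the cloud's role)

# x ~ 0.98, mu ~ 1.97: the regularization shifts the iterates slightly
# away from the unregularized KKT point (x*, mu*) = (1, 2).
print(x, mu)
```

The quadratic terms in α and δ make the saddle function strongly convex in x and strongly concave in μ, which is what buys robustness to asynchrony at the cost of a small, regularization-induced offset from the true solution.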

Cited by 39 publications (28 citation statements)
References 22 publications (30 reference statements)
“…A saddle-point method for distributed, continuous-time, online optimization is proposed in [127]. An asynchronous, primal-dual, cloud-based algorithm for distributed convex optimization is provided in [128]. An asynchronous algorithm which allows the presence of local nonconvex constraints is presented in [129].…”
Section: Discussion and References
Mentioning confidence: 99%
“…The convergence of Algorithm 1 will be measured using a block-maximum norm as in [25], [14], and [24].…”
Section: A Block-maximum Norms
Mentioning confidence: 99%
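The block-maximum norm referenced in this snippet takes the maximum, over the agents' blocks of the decision vector, of each block's (possibly weighted) norm. A minimal sketch, where the per-block Euclidean norm and scalar weights are assumptions about the exact form used:

```python
import numpy as np

def block_maximum_norm(blocks, weights=None):
    """Maximum over blocks of each block's weighted Euclidean norm.

    `blocks` is a list of per-agent subvectors; `weights` (assumed form:
    one positive scalar per block) rescales each block's contribution.
    """
    if weights is None:
        weights = [1.0] * len(blocks)
    return max(np.linalg.norm(b) / w for b, w in zip(blocks, weights))

# Example: two agent blocks; the first block's norm (5.0) dominates.
x = [np.array([3.0, 4.0]), np.array([1.0])]
print(block_maximum_norm(x))  # 5.0
```

Measuring convergence in such a norm lets each agent's block be judged on its own scale, which matters when blocks differ in dimension or numerical magnitude.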
“…Work in [17] was expanded upon in [19], where it was shown that a fixed Tikhonov regularization implies the existence of the nested sets required in [17] for asymptotic convergence. However, developments in [19] require every agent to apply the same regularization, which can be difficult to enforce and verify in practice, especially in large decentralized networks. Moreover, convergence in [19] is measured with respect to the same un-weighted norm for all agents.…”
Section: Introduction
Mentioning confidence: 99%
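The fixed Tikhonov regularization discussed in this snippet augments the ordinary Lagrangian with quadratic damping terms. A sketch of the standard form (the symbols α and δ for the primal and dual regularization weights are assumptions, not notation from the cited works):

```latex
L_{\alpha,\delta}(x,\mu)
  = f(x) + \mu^{\top} g(x)
  + \frac{\alpha}{2}\lVert x\rVert^{2}
  - \frac{\delta}{2}\lVert \mu\rVert^{2},
  \qquad \alpha, \delta > 0.
```

With α, δ > 0 the saddle problem is strongly convex in x and strongly concave in μ, so it has a unique saddle point; the criticism quoted above is that requiring every agent to use the *same* α is hard to enforce in a large decentralized network.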
“…There is a wide variety of statistical and machine learning problems which must be normalized due to disparate numerical scales across potentially many orders of magnitude [20], and which may require measuring convergence of different components in different norms.…”
Section: Introduction
Mentioning confidence: 99%