2022
DOI: 10.48550/arxiv.2205.00647
Preprint

Maximal Dissent: a Fast way to Agree in Distributed Convex Optimization

Abstract: Consider a set of agents collaboratively solving a distributed convex optimization problem, asynchronously, under stringent communication constraints. In such situations, when an agent is activated and is allowed to communicate with only one of its neighbors, we would like to pick the one holding the most informative local estimate. We propose new algorithms where the agents with maximal dissent average their estimates, leading to an information mixing mechanism that often displays faster convergence to an opt…
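The mixing rule described in the abstract can be sketched as a single asynchronous update step. The sketch below is only an illustration of the idea under assumed names (max_dissent_step, grad_f, step_size are hypothetical), not the authors' exact algorithm: the activated agent picks the neighbor whose estimate differs most from its own, averages with it, and both agents take a local gradient step.

```python
import numpy as np

def max_dissent_step(x, neighbors, i, grad_f, step_size):
    """One asynchronous update for agent i under a max-dissent gossip rule (sketch).

    x          : (n, d) array of current local estimates, one row per agent
    neighbors  : dict mapping agent index -> list of neighbor indices
    grad_f     : list of local (sub)gradient oracles, grad_f[k](z) -> array of shape (d,)
    step_size  : positive scalar step size for the local gradient step
    """
    # pick the neighbor holding the maximally dissenting (most informative) estimate
    j = max(neighbors[i], key=lambda k: np.linalg.norm(x[k] - x[i]))

    # average the two estimates (information mixing between agents i and j)
    avg = 0.5 * (x[i] + x[j])

    # each agent then descends along its own local (sub)gradient
    x[i] = avg - step_size * grad_f[i](avg)
    x[j] = avg - step_size * grad_f[j](avg)
    return x
```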

Cited by 1 publication (3 citation statements) | References 14 publications
“…The following theorem from [24] is a consequence of the Robbins-Siegmund theorem and plays a crucial role in the proof of Theorem 1.…”
Section: The Preliminaries (mentioning)
Confidence: 99%
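For reference, the Robbins-Siegmund theorem invoked above can be stated in its classical form as follows; the exact variant used via [24] may differ in notation.

```latex
% Classical Robbins-Siegmund theorem (standard form).
% Let (V_t), (a_t), (b_t), (c_t) be nonnegative sequences adapted to a
% filtration (F_t). If the supermartingale-type inequality below holds with
% summable (a_t) and (b_t), then V_t converges a.s. and sum_t c_t is finite a.s.
\[
\mathbb{E}\!\left[V_{t+1}\mid \mathcal{F}_t\right]
  \le (1+a_t)\,V_t + b_t - c_t,
\qquad \sum_{t} a_t < \infty,\quad \sum_{t} b_t < \infty \ \text{a.s.}
\]
\[
\Longrightarrow\quad V_t \ \text{converges a.s. and}\ \sum_{t} c_t < \infty \ \text{a.s.}
\]
```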
“…Theorem 3 [24, Lemma 3]: Let the optimal set $X = \arg\min_{x \in \mathbb{R}^d} f(x)$ be nonempty for a convex and continuous function $f : \mathbb{R}^d \to \mathbb{R}$. Moreover, assume $\{y(t)\}$ is a sequence satisfying…”
Section: The Preliminaries (mentioning)
Confidence: 99%