2020
DOI: 10.1109/tac.2019.2937499
Distributed Multiagent Convex Optimization Over Random Digraphs

Cited by 27 publications (13 citation statements)
References 47 publications
“…It can then be asserted that the operators F_i are all nonexpansive [19]. As a result, it is easy to verify that problem (13) is equivalent to finding fixed points of the operator F := (1/N) ∑_{i=1}^N F_i, which is exactly the same as (2).…”
Section: Distributed Gradient Descent Algorithm
confidence: 97%
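The excerpt above reformulates the optimization problem as finding a fixed point of the averaged operator F = (1/N) ∑ F_i. A minimal sketch of this idea, under the assumption (not from the cited paper) that each F_i is a gradient-step operator on a local quadratic f_i(x) = 0.5·(x − b_i)², with made-up data b_i:

```python
import numpy as np

# Assumed illustration: F_i(x) = x - gamma*(x - b_i) is the gradient-step
# operator for the local quadratic f_i(x) = 0.5*(x - b_i)**2. For
# gamma in (0, 2) each F_i is nonexpansive, and the fixed point of the
# averaged operator F = (1/N) * sum_i F_i minimizes (1/N) * sum_i f_i.
b = np.array([1.0, 3.0, 5.0])   # hypothetical local data b_i
gamma = 0.5

def F(x):
    # averaged operator (1/N) * sum_i F_i(x)
    return np.mean(x - gamma * (x - b))

x = 0.0
for _ in range(200):
    x = F(x)          # fixed-point iteration on the averaged operator

print(x)  # converges to mean(b) = 3.0, the minimizer of the averaged cost
```

Here the fixed-point equation x = F(x) reduces to x = mean(b), matching the minimizer of the averaged cost, which is the equivalence the excerpt describes.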
“…For problem (2), if there exists a global computing unit that knows the exact operator F, then one well-known centralized algorithm, the KM iteration [8,28], can be used…”
Section: The D-KM Iteration
confidence: 99%
See 2 more Smart Citations
“…Example 1.1 (Asynchronous distributed optimization). The line of research into stochastic operators that perform updates randomly has found fruitful application in distributed optimization, where a set of agents cooperate towards the solution of a problem, but the updates may be performed asynchronously and with different precision [9,36,1,3]. Consider a distributed system with N agents, where each agent has a local cost function f_i : R → R. Assuming that agents are connected through a communication network, one may be interested in developing a distributed algorithm for solving the following optimization problem:…”
Section: Introduction
confidence: 99%
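The asynchronous setting in this excerpt can be sketched with randomly activated agents. This is an assumed toy model, not the cited paper's algorithm: N agents hold local quadratics f_i(x) = 0.5·(x − b_i)² with made-up data b_i, and at each step a single random agent applies a local gradient step with a diminishing step size; the iterate still approaches the minimizer of (1/N) ∑ f_i.

```python
import numpy as np

# Assumed toy model of asynchronous updates: at each step one randomly
# activated agent i applies a gradient step on its own f_i(x) = 0.5*(x - b_i)**2.
# With a diminishing step size the iterate approaches the global minimizer
# of (1/N) * sum_i f_i, i.e. mean(b), despite the uncoordinated updates.
rng = np.random.default_rng(0)
b = np.array([1.0, 3.0, 5.0])   # hypothetical local data b_i
N = len(b)

x = 0.0
for k in range(20000):
    i = rng.integers(N)          # a random agent wakes up
    step = 1.0 / (k + 1)         # diminishing step size
    x = x - step * (x - b[i])    # local gradient step on f_i

print(x)  # close to mean(b) = 3.0
```

With this step-size schedule the iterate is a running average of the sampled b_i, so it converges to mean(b) by the law of large numbers; this is the flavor of randomized-operator convergence the excerpt alludes to.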