2019
DOI: 10.1109/TAC.2018.2880407
Balancing Communication and Computation in Distributed Optimization

Abstract: Methods for distributed optimization have received significant attention in recent years owing to their wide applicability in various domains, including machine learning, robotics, and sensor networks. A distributed optimization method typically consists of two key components: communication and computation. More specifically, at every iteration (or every several iterations) of a distributed algorithm, each node in the network requires some form of information exchange with its neighboring nodes (communication) a…
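The communication/computation trade-off the abstract alludes to can be made concrete with a back-of-the-envelope cost model; the sketch below is purely illustrative, and the unit costs c_comm and c_grad are assumptions, not values from the paper.

```python
def total_cost(iterations, comm_rounds_per_iter, c_comm=1.0, c_grad=10.0):
    """Illustrative run cost of a distributed method that performs
    `comm_rounds_per_iter` neighbor exchanges and one gradient
    evaluation per iteration.  c_comm and c_grad are assumed unit costs."""
    return iterations * (comm_rounds_per_iter * c_comm + c_grad)

# Spending more communication per iteration can pay off overall if it
# reduces the number of iterations needed to reach a given accuracy:
print(total_cost(iterations=1000, comm_rounds_per_iter=1))  # 11000.0
print(total_cost(iterations=400, comm_rounds_per_iter=5))   # 6000.0
```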

Cited by 83 publications (102 citation statements)
References 65 publications (88 reference statements)
“…We can now begin to derive the convergence properties of the stochastic NEAR-DGD^t method. We will closely follow the analysis in [9]. We start by proving that the magnitude of the system-wide stochastic NEAR-DGD^t iterates is upper bounded in expectation.…”
Section: B. Main Results
confidence: 99%
“…We will now briefly summarize the NEAR-DGD method first published in [9]. Each iteration of NEAR-DGD is composed of a number of successive consensus steps, during which agents communicate with their neighbors, followed by a gradient step executed locally.…”
Section: A. The NEAR-DGD Method
confidence: 99%
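The structure the quote describes (nested consensus steps, then a local gradient step) can be sketched in a few lines. This is a minimal illustrative version, not the authors' reference code; the doubly stochastic mixing matrix W, step size alpha, and synchronous update model are all assumptions.

```python
import numpy as np

def near_dgd_iteration(x, grad_f, W, alpha, t):
    """One NEAR-DGD-style iteration (illustrative sketch).

    x      : (n, d) array, row i holds agent i's local iterate
    grad_f : callable mapping (n, d) iterates to (n, d) local gradients
    W      : (n, n) doubly stochastic mixing matrix (assumption)
    alpha  : fixed step size (assumption)
    t      : number of consensus (communication) rounds per iteration
    """
    y = x.copy()
    for _ in range(t):              # t nested consensus steps (communication)
        y = W @ y
    return y - alpha * grad_f(y)    # one local gradient step (computation)

# Example: 4 agents minimizing local quadratics f_i(x) = 0.5 * ||x - b_i||^2
n, d = 4, 2
b = np.random.randn(n, d)
grad = lambda y: y - b              # stacked local gradients, one per row
W = np.full((n, n), 1.0 / n)        # complete-graph averaging matrix
x = np.zeros((n, d))
for k in range(100):
    x = near_dgd_iteration(x, grad, W, alpha=0.5, t=2)
```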
“…, x_n] ∈ R^n, and Ax = 0 represents all equality constraints. Some choices for matrix A include the edge-node incidence matrix [4], the weighted incidence matrix [45], the graph Laplacian matrix [41], and the weighted Laplacian matrix [23, 1]. In this paper, we choose matrix A to be the edge-node incidence matrix of the network graph, i.e., A ∈ R^{ℓ×n}, ℓ = |E|, whose null space is spanned by the vector of all ones.…”
Section: Our Contributions
confidence: 99%
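To make the null-space remark concrete, here is a small sketch (an assumed helper, not from the cited paper) that builds the edge-node incidence matrix A ∈ R^{ℓ×n} of an undirected graph and checks that the all-ones vector satisfies A·1 = 0.

```python
import numpy as np

def incidence_matrix(n, edges):
    """Edge-node incidence matrix A of an undirected graph (sketch).

    Each row corresponds to one edge (i, j) with an arbitrary orientation:
    +1 in column i, -1 in column j.  Then Ax = 0 iff x_i = x_j on every
    edge, i.e. x is constant on each connected component of the graph.
    """
    A = np.zeros((len(edges), n))
    for row, (i, j) in enumerate(edges):
        A[row, i] = 1.0
        A[row, j] = -1.0
    return A

# Path graph on 4 nodes: the all-ones vector lies in the null space of A
A = incidence_matrix(4, [(0, 1), (1, 2), (2, 3)])
assert np.allclose(A @ np.ones(4), 0.0)
```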
“…We simulate the FlexPD-C algorithm and use its theoretical bounds for the stepsize. The objective function at each agent i is of the form f_i(x) = c_i(x_i − b_i)^2, with c_i and b_i being integers that are randomly chosen from [1, 10^3] and [1, 100]. We run the simulation for 1000 random seeds.…”
Section: Numerical Experiments
confidence: 99%
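The quoted setup is easy to reproduce; the sketch below generates the random coefficients as described. The number of agents n and the particular seed are assumptions for illustration, and FlexPD-C itself is not implemented here, only the local objectives and their gradients.

```python
import numpy as np

# Experiment setup from the quoted passage (sketch): each agent i holds
# f_i(x) = c_i * (x_i - b_i)^2 with integer c_i in [1, 10^3], b_i in [1, 100].
rng = np.random.default_rng(seed=0)     # one of the 1000 random seeds
n = 10                                   # number of agents (assumption)
c = rng.integers(1, 10**3 + 1, size=n)   # upper bound inclusive, hence +1
b = rng.integers(1, 100 + 1, size=n)

def grad_fi(i, x_i):
    """Gradient of agent i's local objective f_i(x) = c_i * (x_i - b_i)^2."""
    return 2.0 * c[i] * (x_i - b[i])
```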