FlexPD: A Flexible Framework of First-Order Primal-Dual Algorithms for Distributed Optimization
2021
DOI: 10.1109/tsp.2021.3083981

Cited by 11 publications (11 citation statements)
References 41 publications
“…To achieve fast convergence, accelerated distributed gradient descent algorithms were presented in [38][39][40][41][42]. Meanwhile, distributed primal-dual algorithms were also developed in [43]. Moreover, Newton-type algorithms were developed in [44,45], and quasi-Newton methods were provided in [46].…”
Section: D
Citation type: mentioning
Confidence: 99%
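The statement above only names the family of decentralized gradient methods, so here is a minimal sketch, under illustrative assumptions, of the plain decentralized gradient descent template that the accelerated variants in [38]-[42] build on: each agent averages its iterate with its neighbors through a doubly stochastic mixing matrix and then takes a local gradient step. The scalar quadratic objectives, six-node ring graph, mixing weights, and step-size schedule below are all assumptions made for the example, not details taken from the cited works.

```python
# Minimal sketch (illustrative, not any specific cited algorithm):
# decentralized gradient descent over a ring graph.
import numpy as np

rng = np.random.default_rng(1)
n = 6
a = rng.normal(size=n)                  # agent i holds f_i(x) = 0.5 * (x - a[i])**2

# Doubly stochastic mixing matrix for a ring: 1/2 self-weight, 1/4 per neighbor.
P = np.roll(np.eye(n), 1, axis=0)       # cyclic shift = "pass to next neighbor"
W = 0.5 * np.eye(n) + 0.25 * (P + P.T)

x = np.zeros(n)                         # one local copy of the decision variable per agent
for k in range(2000):
    step = 0.5 / (k + 1) ** 0.5         # diminishing step size (a constant step stalls at a neighborhood)
    x = W @ x - step * (x - a)          # mix with neighbors, then take a local gradient step

print("consensus error:", np.max(np.abs(x - x.mean())))   # slowly driven toward zero
print("distance to optimum:", abs(x.mean() - a.mean()))   # minimizer of sum_i f_i is mean(a)
```

The need for diminishing step sizes here (a constant step would leave a residual consensus error) is one common motivation for the primal-dual schemes discussed in the statements below.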
“…However, distributed ADMM methods require minimizing a local objective function at each iteration, which incurs a high computational cost and can upset the balance between computation cost and performance. This bottleneck is overcome in References 25,26 through a multi‐step communication strategy that performs multiple gradient descent steps and one dual step per iteration. By adjusting the number of gradient descent steps in each iteration, the algorithms in References 25,26 balance the tradeoff between computation and communication costs and performance.…”
Section: Introduction
Citation type: mentioning
Confidence: 99%
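To make the multi-step strategy described in the statement above concrete, here is a minimal sketch, not the exact FlexPD updates of References 25,26: each outer iteration runs T primal gradient steps on an augmented Lagrangian of the consensus-constrained problem, followed by a single dual ascent step. The quadratic local objectives, ring-graph Laplacian, step sizes alpha and beta, penalty c, and the parameter T are all illustrative assumptions; T plays the role of the tunable number of gradient steps mentioned in the statement.

```python
# Minimal sketch (illustrative, not the authors' exact method): a multi-step
# primal-dual iteration with T primal gradient steps and one dual step per round.
import numpy as np

rng = np.random.default_rng(0)
n = 6
a = rng.normal(size=n)                  # agent i holds f_i(x) = 0.5 * (x - a[i])**2

# Ring-graph Laplacian: L @ x == 0 exactly when all agents agree (consensus).
P = np.roll(np.eye(n), 1, axis=0)
L = 2 * np.eye(n) - P - P.T

def multi_step_primal_dual(T=3, alpha=0.1, beta=0.1, c=1.0, outer_iters=300):
    x = np.zeros(n)                     # primal variables, one per agent
    lam = np.zeros(n)                   # dual variables for the constraint L @ x = 0
    for _ in range(outer_iters):
        for _ in range(T):              # primal phase: T gradient steps on the augmented
            # Lagrangian  f(x) + lam^T L x + (c/2) x^T L x
            x -= alpha * ((x - a) + L @ lam + c * (L @ x))
        lam += beta * (L @ x)           # dual phase: one ascent step
    return x

x = multi_step_primal_dual()
print("consensus error:", np.max(np.abs(x - x.mean())))   # -> ~0 for small enough step sizes
print("distance to optimum:", abs(x.mean() - a.mean()))   # minimizer of sum_i f_i is mean(a)
```

In this sketch each primal step uses L @ x and L @ lam, i.e., one round of neighbor communication, so the choice of T is exactly the computation/communication-versus-performance knob the citing authors describe.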
“…It is worth noting that the multi‐step communication strategy proposed in References 25,26 is different from multi‐step distributed online learning (Reference 27).…”
Section: Introduction
Citation type: mentioning
Confidence: 99%