2019 IEEE 58th Conference on Decision and Control (CDC)
DOI: 10.1109/cdc40024.2019.9029902

A General Framework of Exact Primal-Dual First-Order Algorithms for Distributed Optimization

Abstract: In this paper, we study the problem of minimizing a sum of convex objective functions, each of which is locally available to an agent in the network. Distributed optimization algorithms make it possible for the agents to cooperatively solve the problem through local computations and communications with neighbors. Lagrangian-based distributed optimization algorithms have received significant attention in recent years, due to their exact convergence property. However, many of these algorithms have slow convergen…

Cited by 13 publications (16 citation statements); References 54 publications.
“…Reference [38] considers several deterministic and randomized variants to solve (16) inexactly in an iterative fashion, including gradient-like and Jacobi-like primal updates. Reference [66] considers gradient-like primal updates and shows by simulation that performing a few inner iterations (2-4, more precisely) usually improves performance over single-inner-iteration methods like EXTRA [34].…”
Section: B. Consensus Optimization (mentioning, confidence: 99%)
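For context on the single-inner-iteration baseline mentioned above, the EXTRA update can be sketched on a toy consensus problem. This is an illustrative sketch, not the paper's algorithm: the quadratic local objectives, the 4-agent ring graph, the mixing matrix W = I - L/4, and the step size are all assumed choices for demonstration.

```python
import numpy as np

def extra(a, W, alpha=0.3, iters=500):
    """EXTRA on min_x sum_i 0.5*(x - a_i)^2 over a network with mixing matrix W.

    Each outer iteration uses exactly one gradient evaluation per agent
    (a single "inner iteration"), yet the method converges to the exact
    global minimizer rather than a neighborhood of it.
    """
    n = len(a)
    grad = lambda x: x - a                  # stacked local gradients
    W_tilde = (np.eye(n) + W) / 2           # EXTRA's second mixing matrix
    x_prev = np.zeros(n)
    x = W @ x_prev - alpha * grad(x_prev)   # initialization step
    for _ in range(iters):
        # x_{k+2} = (I+W) x_{k+1} - W~ x_k - alpha*(grad(x_{k+1}) - grad(x_k))
        x_next = (np.eye(n) + W) @ x - W_tilde @ x_prev \
                 - alpha * (grad(x) - grad(x_prev))
        x_prev, x = x, x_next
    return x

# Ring of 4 agents: W = I - L/4 with L the graph Laplacian (doubly stochastic)
L = np.array([[ 2., -1.,  0., -1.],
              [-1.,  2., -1.,  0.],
              [ 0., -1.,  2., -1.],
              [-1.,  0., -1.,  2.]])
W = np.eye(4) - L / 4
a = np.array([1.0, 2.0, 3.0, 4.0])
x = extra(a, W)  # each entry approaches the exact minimizer mean(a) = 2.5
```

The exact consensus minimizer of this toy problem is the average of the local targets, which is what distinguishes "exact" methods like EXTRA from plain distributed gradient descent with a fixed step size.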
“…However, distributed ADMM methods require the minimization of a local objective function at each iteration, which is computationally expensive and can upset the balance between computation cost and performance. This bottleneck is overcome in References 25, 26 through a multi-step communication strategy that performs multiple gradient descent steps and one dual step per iteration. By adjusting the number of gradient descent steps in each iteration, the algorithms in References 25, 26 achieve a balance between computation or communication costs and performance.…”
Section: Introduction (mentioning, confidence: 99%)
“…This bottleneck is overcome in References 25, 26 through a multi-step communication strategy that performs multiple gradient descent steps and a dual step per iteration. By adjusting the number of gradient descent steps in each iteration, the algorithms in References 25, 26 achieve a balance between computation or communication costs and performance. It is worth noting that the multi-step communication strategy proposed in References 25, 26 is different from multi-step distributed online learning 27.…”
Section: Introduction (mentioning, confidence: 99%)
“…Our focus is on the design of distributed algorithms for Problem (P) that provably converge at a linear rate. When G = 0, several distributed schemes enjoying such a property have been proposed in the literature; examples include EXTRA [1], AugDGM [2], NEXT [3], SONATA [4], [5], DIGing [6], NIDS [7], Exact Diffusion [8], MSDA [9], and the distributed algorithms in [10], [11], and [12]. When G ≠ 0, results are scarce; to our knowledge, the only two schemes available in the literature achieving linear rate for (P) are SONATA [5] and the distributed proximal gradient algorithm [13].…”
Section: Introduction (mentioning, confidence: 99%)