2021
DOI: 10.48550/arxiv.2110.05282
Preprint

Optimal Gradient Tracking for Decentralized Optimization

Abstract: In this paper, we focus on solving the decentralized optimization problem of minimizing the sum of n objective functions over a multi-agent network. The agents are embedded in an undirected graph, where each agent can send/receive information directly only to/from its immediate neighbors. Assuming smooth and strongly convex objective functions, we propose an Optimal Gradient Tracking (OGT) method that achieves the optimal gradient computation complexity O(√κ log(1/ε)) and the optimal communication complexity O(…)
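To make the setting concrete, the following is a minimal sketch of the plain gradient-tracking iteration that methods like OGT build on; this is the vanilla scheme, not the paper's OGT algorithm, and the step size, mixing matrix, and function names are illustrative assumptions.

```python
import numpy as np

# Sketch of vanilla gradient tracking (illustrative, not the paper's OGT):
# each agent i keeps an iterate x_i and a tracker y_i estimating the average
# gradient; W is a doubly stochastic mixing matrix matching the graph.
def gradient_tracking(grads, W, x0, alpha=0.1, iters=200):
    """grads: list of per-agent gradient callables; W: (n, n) mixing matrix;
    x0: (n, d) stacked initial iterates; alpha: assumed step size."""
    n, d = x0.shape
    x = x0.copy()
    y = np.stack([grads[i](x[i]) for i in range(n)])  # y_i^0 = grad f_i(x_i^0)
    g_old = y.copy()
    for _ in range(iters):
        x = W @ x - alpha * y                         # consensus + descent step
        g_new = np.stack([grads[i](x[i]) for i in range(n)])
        y = W @ y + g_new - g_old                     # track the average gradient
        g_old = g_new
    return x
```

With quadratic objectives f_i(x) = ½‖x − b_i‖², every agent's iterate converges to the network-wide minimizer, the mean of the b_i.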

Cited by 1 publication (1 citation statement)
References: 55 publications
“…The condition number of P K (W) equals O(1) due to the specific structure of the polynomial (see [57]). Moreover, loopless Chebyshev acceleration proposed in [59] is achieved without explicit multi-step consensus.…”
Section: Multi-step Consensus
confidence: 99%
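The multi-step consensus referenced in the citation statement can be sketched as follows: K plain averaging rounds W^K are replaced by the Chebyshev polynomial P_K(W), which contracts disagreement much faster for the same K communication rounds. This is the explicit K-step loop, not the loopless variant of [59]; the function name and the mixing-rate parameter lam are illustrative assumptions.

```python
import numpy as np

# Sketch of K-step Chebyshev-accelerated consensus ("multi-step consensus"):
# apply P_K(W) x = T_K(W / lam) x / T_K(1 / lam) via the three-term Chebyshev
# recurrence. P_K(1) = 1, so the network average is preserved exactly, while
# |P_K| <= 1 / T_K(1 / lam) on the disagreement spectrum [-lam, lam].
def chebyshev_consensus(W, x, K, lam):
    """W: symmetric doubly stochastic mixing matrix; x: (n, d) agent states;
    lam: bound on |eigenvalues| of W on the disagreement subspace (lam < 1)."""
    z_prev, z = x, (W @ x) / lam          # T_0(W/lam) x and T_1(W/lam) x
    t_prev, t = 1.0, 1.0 / lam            # scalar recurrence for T_k(1/lam)
    for _ in range(K - 1):
        z_prev, z = z, 2.0 * (W @ z) / lam - z_prev
        t_prev, t = t, 2.0 * t / lam - t_prev
    return z / t                          # normalization enforces P_K(1) = 1
```

On a 4-node ring with lam = 0.5 and K = 5, the disagreement shrinks by roughly 1/T_5(2) ≈ 1/362 per call, far better than the 0.5^5 ≈ 0.03 of five plain averaging rounds.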