2022
DOI: 10.48550/arxiv.2211.00533
Preprint

Optimal Complexity in Non-Convex Decentralized Learning over Time-Varying Networks

Abstract: Decentralized optimization with time-varying networks is an emerging paradigm in machine learning. It saves considerable communication overhead in large-scale deep training and is more robust in wireless scenarios, especially when nodes are moving. Federated learning can also be regarded as decentralized optimization with time-varying communication patterns that alternate between global averaging and local updates. While numerous studies exist to clarify its theoretical limits and develop efficient algorithms, it rem…

Cited by 1 publication (1 citation statement)
References 22 publications
“…3, by enhancing the vanilla Push-DIGing. Inspired by the algorithm development in [36,37], we add two additional components to Push-DIGing: gradient accumulation and multiple-gossip communication. We call the new algorithm MG-Push-DIGing, where "MG" indicates "multiple gossips".…”
Section: MG-Push-DIGing Algorithm
confidence: 99%
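
The citation statement above names two generic ingredients added on top of Push-DIGing: accumulating several gradient evaluations before communicating, and running several gossip rounds per update. The Python sketch below only illustrates those two ingredients on a toy static graph with quadratic objectives; it is not the cited MG-Push-DIGing method (which builds on push-sum style gradient tracking over time-varying directed graphs), and all names here (multi_gossip, accumulated_grad, W, targets) are illustrative assumptions rather than anything from the paper.

import numpy as np

rng = np.random.default_rng(0)

def multi_gossip(x, W, rounds):
    # "Multiple gossips": apply several consecutive mixing steps x <- W x
    # per iteration instead of a single one, tightening consensus each round.
    for _ in range(rounds):
        x = W @ x
    return x

def accumulated_grad(x, targets, num_samples):
    # "Gradient accumulation": average several noisy gradient samples before
    # communicating. Node i's toy objective is 0.5 * (x_i - targets_i)^2.
    acc = np.zeros_like(x)
    for _ in range(num_samples):
        acc += (x - targets) + 0.01 * rng.standard_normal(x.shape)
    return acc / num_samples

if __name__ == "__main__":
    n = 4
    W = np.full((n, n), 1.0 / n)       # doubly stochastic mixing matrix, static graph for simplicity
    targets = np.array([1.0, 2.0, 3.0, 4.0])
    x = np.zeros(n)                     # one scalar parameter per node
    for _ in range(100):
        g = accumulated_grad(x, targets, num_samples=8)
        x = multi_gossip(x - 0.2 * g, W, rounds=3)
    print(x)                            # all entries approach the global optimum 2.5

In this toy setting the accumulated gradient reduces sampling noise before each communication round, and the repeated gossip steps drive the nodes toward consensus faster; the cited work combines analogous components with Push-DIGing to handle time-varying directed networks.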