2021
DOI: 10.1109/TSP.2021.3094906

On the Convergence of Nested Decentralized Gradient Methods With Multiple Consensus and Gradient Steps

Abstract: In this paper, we consider minimizing a sum of local convex objective functions in a distributed setting, where the cost of communication and/or computation can be expensive. We extend and generalize the analysis for a class of nested gradient-based distributed algorithms (NEAR-DGD, [1]) to account for multiple gradient steps at every iteration. We show the effect of performing multiple gradient steps on the rate of convergence and on the size of the neighborhood of convergence, and prove R-Linear convergence t…
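The abstract describes a nested scheme in which each agent alternates several local gradient steps with several consensus (averaging) rounds over the network. Below is a minimal sketch of one such iteration, assuming quadratic local objectives, a fixed step size, and an illustrative doubly stochastic mixing matrix for a 4-agent ring; the function names, parameters, and data are hypothetical and not taken from the paper.

```python
# Hypothetical sketch of a nested gradient/consensus iteration in the spirit of
# NEAR-DGD with multiple gradient steps; not the paper's exact formulation.
import numpy as np

def local_gradients(X, A, b):
    # Gradients of each agent's quadratic f_i(x) = 0.5 * ||A_i x - b_i||^2,
    # stacked row-wise to match the stacked iterate matrix X (n_agents x d).
    return np.stack([A[i].T @ (A[i] @ X[i] - b[i]) for i in range(len(A))])

def near_dgd_iteration(X, W, A, b, alpha, n_grad=1, n_cons=1):
    # Local phase: each agent performs n_grad gradient steps on its own f_i.
    for _ in range(n_grad):
        X = X - alpha * local_gradients(X, A, b)
    # Communication phase: n_cons rounds of mixing with the matrix W.
    for _ in range(n_cons):
        X = W @ X
    return X

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, d, m = 4, 3, 5                      # agents, dimension, samples per agent
    A = rng.standard_normal((n, m, d))
    b = rng.standard_normal((n, m))
    # Doubly stochastic mixing matrix for a 4-agent ring (illustrative only).
    W = np.array([[0.50, 0.25, 0.00, 0.25],
                  [0.25, 0.50, 0.25, 0.00],
                  [0.00, 0.25, 0.50, 0.25],
                  [0.25, 0.00, 0.25, 0.50]])
    X = np.zeros((n, d))
    for _ in range(200):
        X = near_dgd_iteration(X, W, A, b, alpha=0.05, n_grad=2, n_cons=2)
    # Disagreement among agents shrinks with more consensus rounds per iteration.
    print("disagreement:", np.linalg.norm(X - X.mean(axis=0)))
```

Increasing n_cons tightens agreement among agents at extra communication cost, while increasing n_grad trades extra local computation for faster progress on the objective, which is the compute/communication trade-off the paper analyzes.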

Cited by 8 publications
References 59 publications (136 reference statements)