2021
DOI: 10.48550/arxiv.2110.06992
Preprint

Convergence Rates of Decentralized Gradient Methods over Cluster Networks

Abstract: We present an analysis of the performance of decentralized consensus-based gradient (DCG) methods for solving optimization problems over a cluster network of nodes. This type of network is composed of a number of densely connected clusters with sparse connections between them. Decentralized algorithms over cluster networks have been observed to exhibit two-time-scale dynamics, where information within any cluster is mixed much faster than information across clusters. Based on this observation, we present a no…
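For context, the DCG iteration the abstract refers to is the standard mix-then-descend update, x_i^{k+1} = Σ_j w_ij x_j^k − α ∇f_i(x_i^k), run over a graph whose mixing is fast within clusters and slow across them. The following is a minimal sketch of that update on a toy two-cluster network; the network size, mixing weights, quadratic local losses, and step size are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

# Minimal sketch of decentralized consensus-based gradient (DCG) iterations
# over a cluster network. Assumptions (not from the paper): two clusters of
# 3 nodes each, scalar decision variables, and quadratic local losses
# f_i(x) = 0.5 * (x - b_i)^2, so the global optimum is mean(b).

rng = np.random.default_rng(0)
n = 6                      # nodes: cluster {0, 1, 2} and cluster {3, 4, 5}
b = rng.normal(size=n)     # targets defining the local quadratic losses

# Doubly stochastic mixing matrix W: dense links within each cluster and a
# single sparse inter-cluster link between nodes 2 and 3 (illustrative weights).
W = np.zeros((n, n))
intra, bridge = 0.3, 0.1
for cluster in ([0, 1, 2], [3, 4, 5]):
    for i in cluster:
        for j in cluster:
            if i != j:
                W[i, j] = intra
W[2, 3] = W[3, 2] = bridge
np.fill_diagonal(W, 1.0 - W.sum(axis=1))   # each row (and column) sums to 1

def local_grad(x):
    """Gradient of f_i(x_i) = 0.5 * (x_i - b_i)^2 at every node."""
    return x - b

alpha = 0.05               # constant step size (assumed for the sketch)
x = np.zeros(n)            # one local iterate per node
for k in range(2000):
    # DCG update: average with neighbors, then take a local gradient step.
    x = W @ x - alpha * local_grad(x)

# With a constant step size, DCG reaches only a neighborhood of the optimum;
# the per-node values cluster around the minimizer of the average loss.
print("per-node iterates:", np.round(x, 3))
print("minimizer of the average loss:", round(b.mean(), 3))
```

In this toy setup the intra-cluster weights are much larger than the single bridge weight, which is what produces the two-time-scale behavior the abstract describes: iterates within a cluster agree quickly, while agreement across the bridge is slow.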

Cited by 1 publication (2 citation statements). References 11 publications (27 reference statements).
“…Proper choices of these constants will also help us to derive the convergence rates of (2). Similar approach has been used in different settings of two-time-scale methods, see for example [53,66]. We conclude this section by introducing two assumptions for our analysis studied later.…”
Section: Related Work (mentioning)
confidence: 96%
“…Other Settings. We also want to mention some related literature in game theory [39,40,41,42,43], two-time-scale stochastic approximation [44,45,46,47,48,49,50,51,52,53], reinforcement learning [54,55,56,57,58], two-time-scale optimization [59,60], and decentralized optimization [61,62,63,64,65,66,67]. These works study different variants of two-time-scale methods mostly for solving a single optimization problem, and often aim to find global optimality (or fixed points) using different structure of the underlying problems (e.g., Markov structure in stochastic games and reinforcement learning or strong monotonicity in stochastic approximation).…”
Section: Related Work (mentioning)
confidence: 99%