2021
DOI: 10.1109/tac.2020.3031018

Convergence Rates of Distributed Gradient Methods Under Random Quantization: A Stochastic Approximation Approach

Cited by 44 publications (45 citation statements).
References 36 publications.
“…algorithms; [15] employed biased but contractive compressors to design a decentralized SGD algorithm; [16] and [17], [18] utilized unbiased compressors to respectively design decentralized gradient descent and primal-dual algorithms; [19] and [20] made use of the standard uniform quantizer to respectively design decentralized subgradient methods and alternating direction method of multipliers approaches; [21], [22] and [23] respectively adopted the unbiased random quantization and the adaptive quantization to design decentralized projected subgradient algorithms; [24] and [25]- [28] exploited the standard uniform quantizer with dynamic quantization level to respectively design decentralized subgradient and primal-dual algorithms; and [29] applied the standard uniform quantizer with a fixed quantization level to design a decentralized gradient descent algorithm. The compressors mentioned above can be unified into three general classes.…”
Section: A Related Work and Motivationmentioning
confidence: 99%
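The unbiased random quantization mentioned in the excerpt above (e.g., in [21], [22]) can be illustrated with a short sketch. The following is a minimal, hypothetical Python implementation of a dithered quantizer with a fixed step size `delta`: each coordinate is rounded to one of its two nearest grid points with probabilities chosen so that the quantizer is unbiased. The function name and the step size are illustrative and not taken from the cited papers.

```python
import numpy as np

def random_quantize(x, delta=0.1, rng=None):
    """Unbiased (dithered) random quantizer with step size `delta`.

    Each coordinate is rounded to one of its two nearest grid points
    k*delta and (k+1)*delta, with probabilities chosen so that
    E[Q(x)] = x (unbiasedness).
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x, dtype=float)
    lower = np.floor(x / delta) * delta   # nearest grid point below x
    p_up = (x - lower) / delta            # rounding-up probability that makes E[Q(x)] = x
    return lower + delta * (rng.random(x.shape) < p_up)

# Sanity check of unbiasedness: the empirical mean approaches x.
x = np.array([0.237, -1.412])
print(np.mean([random_quantize(x, delta=0.1) for _ in range(20000)], axis=0))  # ≈ [0.237, -1.412]
```

The key property is that the quantization error has zero mean, which is what lets the convergence analyses treat it as stochastic noise in a stochastic-approximation framework.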
“…In this scheme, the agents locally update their state variables by averaging the state variables received from their neighbors and then follow the subgradient descent direction. More recently, additional convergence results were presented in [53], where random (dithered) quantization is applied, along with a weighting scheme that gives more or less importance to the analog local state versus the quantized averaged state of the neighbors. In [54], the randomized quantizers proposed in [41] are considered for a distributed gradient descent implementation using consensus with compressed iterates and an update rule similar to the one adopted in [53].…”
Section: A Distributed Quantizationmentioning
confidence: 99%
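The update described in the excerpt above (averaging quantized neighbor states, weighting them against the analog local state, and then taking a gradient step) can be sketched as follows. This is a hypothetical illustration, not the exact algorithm of [53] or [54]; the mixing matrix `W`, the weighting parameter `gamma`, and the helper `quantized_dgd_step` are assumptions introduced here.

```python
import numpy as np

def quantized_dgd_step(x, W, grads, step, gamma, delta=0.1, rng=None):
    """One illustrative iteration: quantized consensus + local gradient step.

    x     : (n_agents, dim) array of local iterates
    W     : (n_agents, n_agents) doubly stochastic mixing matrix
    grads : list of callables, grads[i](x_i) -> gradient of agent i's local cost
    gamma : weight on the quantized neighbor average vs. the analog local state
    """
    rng = np.random.default_rng() if rng is None else rng
    # Unbiased random (dithered) quantization of the transmitted states.
    lower = np.floor(x / delta) * delta
    q = lower + delta * (rng.random(x.shape) < (x - lower) / delta)
    # Consensus on the quantized states, blended with the analog local state.
    x_new = (1.0 - gamma) * x + gamma * (W @ q)
    # Local gradient descent step.
    for i, grad_i in enumerate(grads):
        x_new[i] -= step * grad_i(x[i])
    return x_new

# Toy usage: two agents minimizing f_i(x) = 0.5 * ||x - c_i||^2 on a complete graph.
W = np.full((2, 2), 0.5)
c = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
grads = [lambda z, ci=ci: z - ci for ci in c]
x = np.zeros((2, 2))
for _ in range(300):
    x = quantized_dgd_step(x, W, grads, step=0.05, gamma=0.5, delta=0.05)
print(x)  # both rows settle near the network-wide optimum [0.5, 0.5], up to quantization error
```

With a fixed step size and a fixed quantization step, the iterates only reach a neighborhood of the optimum; the convergence-rate results in the surveyed papers quantify how that neighborhood (or the rate to exact optimality, with diminishing steps) depends on the quantization resolution.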
“…, N. To this end, we exploit the block decomposition of the matrix V in (53). Regarding s_i, since v_{1k} = π_k, from (160) we have that:…”
Section: Appendix B Proof Of Lemmamentioning
confidence: 99%
“…Other Settings. We also want to mention some related literature in game theory [39,40,41,42,43], two-time-scale stochastic approximation [44,45,46,47,48,49,50,51,52,53], reinforcement learning [54,55,56,57,58], two-time-scale optimization [59,60], and decentralized optimization [61,62,63,64,65,66,67]. These works study different variants of two-time-scale methods, mostly for solving a single optimization problem, and often aim to find global optima (or fixed points) by using different structures of the underlying problems (e.g., the Markov structure in stochastic games and reinforcement learning, or strong monotonicity in stochastic approximation).…”
Section: Related Workmentioning
confidence: 99%