2022
DOI: 10.1109/tit.2022.3161620

vqSGD: Vector Quantized Stochastic Gradient Descent

Cited by 23 publications (29 citation statements)
References 19 publications
“…non-subtractive dither quantization. In some literature (see [9]), the dimensionality reduction is performed on the whole gradient vector. More naively, constraints on the communication capabilities between the remote users and the PS can also be addressed by restricting the number of communication iterations between gradient updates [13], [14].…”
Section: A Literature Review (mentioning)
confidence: 99%
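The round-restriction idea mentioned in this statement (often called local SGD) can be illustrated with a small sketch. The helper name local_sgd, the least-squares objective, and all hyperparameters below are illustrative assumptions rather than details taken from the cited works [13], [14].

```python
import numpy as np

def local_sgd(worker_shards, w0, lr=0.1, rounds=10, local_steps=5):
    """Toy local SGD: each worker performs several local gradient steps
    between communication rounds, so models are exchanged only once per
    round instead of once per gradient update."""
    w = w0.copy()
    for _ in range(rounds):
        local_models = []
        for X, y in worker_shards:                            # one (X, y) shard per worker
            w_local = w.copy()
            for _ in range(local_steps):                      # local updates, no communication
                grad = 2.0 * X.T @ (X @ w_local - y) / len(y) # least-squares gradient
                w_local -= lr * grad
            local_models.append(w_local)
        w = np.mean(local_models, axis=0)                     # one averaging step per round
    return w
```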
“…Another approach for gradient compression is uniform quantization with non-subtractive dithering [9], [23]-[25], which yields unbiased quantization. This approach finds its theoretical foundations in works such as [26, eq.…”
Section: B Gradient Distortion (mentioning)
confidence: 99%
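A minimal sketch of non-subtractive dithered uniform quantization is given below; the step size and the empirical unbiasedness check are illustrative assumptions, not parameters from [9] or [23]-[25].

```python
import numpy as np

def dithered_quantize(x, step=0.05, rng=None):
    """Uniform quantizer with non-subtractive dithering: add uniform noise
    in [-step/2, step/2) before rounding to the lattice, and do not subtract
    the dither at the decoder. With this dither, E[q] = x coordinate-wise."""
    rng = rng or np.random.default_rng()
    dither = rng.uniform(-step / 2, step / 2, size=np.shape(x))
    return np.round((np.asarray(x) + dither) / step) * step

# Empirical check of unbiasedness on a random gradient vector (illustrative).
rng = np.random.default_rng(0)
g = rng.normal(size=4)
avg = np.mean([dithered_quantize(g, rng=rng) for _ in range(20000)], axis=0)
print(np.max(np.abs(avg - g)))   # small: the quantized gradient is unbiased in expectation
```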
“…distributed SGD and FL systems, e.g., sketches, can achieve provable privacy benefits [125], [180]. Therefore, a novel sketch-based framework (DiffSketch) for distributed learning has been proposed, improving absolute test accuracy while offering certain privacy guarantees and communication compression.…”
Section: • Model Compression (mentioning)
confidence: 99%
“…Therefore, a novel sketch-based framework (DiffSketch) for distributed learning has been proposed, improving absolute test accuracy while offering certain privacy guarantees and communication compression. Moreover, the work in [180] has presented a family of vector quantization schemes, termed Vector-Quantized Stochastic Gradient Descent (VQSGD), which provides an asymptotic reduction in the communication cost and automatic privacy guarantees. • Encryption.…”
Section: • Model Compression (mentioning)
confidence: 99%
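To make the vector-quantization idea concrete, here is a simplified sketch in the spirit of vqSGD: the gradient is mapped to one vertex of a scaled cross-polytope, so only the sampled vertex index needs to be transmitted. The scaling constant and the handling of leftover probability mass are illustrative assumptions, not necessarily the paper's exact construction.

```python
import numpy as np

def cross_polytope_quantize(v, rng=None):
    """Unbiased vector quantization sketch: write v as a convex combination
    of the 2d vertices {+s*e_i, -s*e_i} of a scaled cross-polytope and sample
    one vertex, so only its index (O(log d) bits) must be communicated."""
    rng = rng or np.random.default_rng()
    d = v.size
    s = np.sqrt(d) * np.linalg.norm(v) + 1e-12   # any s >= ||v||_1 keeps v inside the polytope
    probs = np.zeros(2 * d)
    probs[:d] = np.clip(v, 0.0, None) / s        # mass on +s*e_i for positive coordinates
    probs[d:] = np.clip(-v, 0.0, None) / s       # mass on -s*e_i for negative coordinates
    leftover = 1.0 - probs.sum()                 # split leftover evenly on +/- s*e_1 (zero mean)
    probs[0] += leftover / 2
    probs[d] += leftover / 2
    idx = rng.choice(2 * d, p=probs)             # the transmitted message is just this index
    vertex = np.zeros(d)
    vertex[idx % d] = s if idx < d else -s
    return idx, s, vertex                        # E[vertex] = v, so the estimate is unbiased

rng = np.random.default_rng(1)
v = np.array([0.3, -0.1, 0.05])
avg = np.mean([cross_polytope_quantize(v, rng)[2] for _ in range(20000)], axis=0)
print(avg)   # close to v on average
```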
“…Sparsification methods rely on fixed- or variable-rate elimination of dimensions of the gradient vector based on a specific criterion such as magnitude or variance [2]-[5]. Quantization methods, by contrast, focus on discretizing the gradient vectors through dimension-wise [6] or vector quantization [7]. Data privacy in the FL model has mainly been addressed through DP, a context-free notion evaluating the privacy loss incurred by membership attacks that extract information about individual sample points [8].…”
mentioning
confidence: 99%
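For comparison with the quantization approaches, magnitude-based sparsification of the kind referenced in [2]-[5] can be sketched as follows; the function name topk_sparsify and the fixed choice of k are illustrative assumptions.

```python
import numpy as np

def topk_sparsify(grad, k):
    """Magnitude-based sparsification: keep the k largest-magnitude
    coordinates of the gradient and zero out the rest, so only k
    (index, value) pairs need to be communicated."""
    idx = np.argpartition(np.abs(grad), -k)[-k:]   # indices of the k largest |g_i|
    sparse = np.zeros_like(grad)
    sparse[idx] = grad[idx]
    return idx, grad[idx], sparse

g = np.array([0.02, -0.9, 0.1, 0.4, -0.03])
print(topk_sparsify(g, 2)[2])   # only the two largest-magnitude entries survive
```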