2022
DOI: 10.48550/arxiv.2206.02604
Preprint

Rate-Distortion Theoretic Bounds on Generalization Error for Distributed Learning

Abstract: In this paper, we use tools from rate-distortion theory to establish new upper bounds on the generalization error of statistical distributed learning algorithms. Specifically, there are K clients whose individually chosen models are aggregated by a central server. The bounds depend on the compressibility of each client's algorithm while keeping other clients' algorithms un-compressed, and leverage the fact that small changes in each local model change the aggregated model by a factor of only 1/K. Adopting a re…
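To make the 1/K sensitivity mentioned in the abstract concrete, here is a minimal sketch (not from the paper itself) in which the server aggregates the K client models by simple averaging; perturbing a single client's model by some delta moves the aggregate by only delta/K. The function name `aggregate` and the perturbation `delta` are illustrative assumptions.

```python
import numpy as np

def aggregate(client_models):
    """Server-side aggregation: plain average of the K client models."""
    return np.mean(client_models, axis=0)

K, d = 10, 5                        # K clients, d-dimensional models
rng = np.random.default_rng(0)
models = rng.normal(size=(K, d))    # individually chosen client models

w_bar = aggregate(models)

# Perturb a single client's model by delta ...
delta = np.ones(d)
perturbed = models.copy()
perturbed[0] += delta
w_bar_perturbed = aggregate(perturbed)

# ... and the aggregated model moves by only delta / K.
print(np.allclose(w_bar_perturbed - w_bar, delta / K))  # True
```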

Cited by 1 publication (1 citation statement). References 18 publications.
“…Russo and Zou [29] and Xu and Raginsky [30] provided generalization upper bounds in terms of the mutual information between the input and the output. Sefidgaran et al. [31,32] further related mutual information to two other approaches to studying generalization error, i.e., compressibility and fractal dimensions, providing a unifying framework for the three directions of study.…”
Section: Source Generalization Study of IB and DIB (mentioning)
confidence: 99%
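For context, the mutual-information bound of Xu and Raginsky [30] referred to in the citation statement is commonly stated in the following standard form (for a loss that is $\sigma$-subgaussian under the data distribution); this is the well-known result, not a formula taken from the present preprint:

```latex
\[
  \bigl|\mathbb{E}\,\mathrm{gen}(S, W)\bigr|
  \;\le\;
  \sqrt{\frac{2\sigma^{2}}{n}\, I(S; W)},
\]
where $S = (Z_1, \dots, Z_n)$ is the training sample, $W$ is the hypothesis output by the
learning algorithm, and $I(S; W)$ is the mutual information between the input dataset and
the learned model.
```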