2023
DOI: 10.21203/rs.3.rs-3364332/v1
Preprint

Data-Free Knowledge Distillation via Generator-Free Data Generation for Non-IID Federated Learning

Siran Zhao,
Tianchi Liao,
Lele Fu
et al.

Abstract: Data heterogeneity (Non-IID) in Federated Learning (FL) is a widely recognized problem that leads to local model drift and performance degradation. Because of its advantages, knowledge distillation has been explored in recent work to refine global models. However, these approaches rely on a proxy dataset or a data generator. First, in many FL scenarios, a proxy dataset does not necessarily exist on the server. Second, the quality of data generated by the generator is unstable, and the generato…
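The abstract is truncated here, so the paper's generator-free generation step is not visible. For orientation only, the sketch below (in PyTorch, with hypothetical model and loader names) shows the generic server-side ensemble-distillation step that such data-free KD methods build on: client models act as teachers and the global model as the student on whatever transfer data is available. It is not the paper's actual algorithm.

```python
import torch
import torch.nn.functional as F

def distill_global_model(global_model, client_models, transfer_loader,
                         optimizer, temperature=2.0):
    """One epoch of server-side ensemble distillation (generic sketch).

    The averaged client soft predictions act as the teacher signal;
    `transfer_loader` stands in for whatever proxy or generated data
    the server can use.
    """
    global_model.train()
    for x in transfer_loader:
        with torch.no_grad():
            # Teacher: mean of the client models' logits.
            teacher_logits = torch.stack([m(x) for m in client_models]).mean(0)
        student_logits = global_model(x)
        # Standard KD loss: KL divergence between softened distributions.
        loss = F.kl_div(
            F.log_softmax(student_logits / temperature, dim=1),
            F.softmax(teacher_logits / temperature, dim=1),
            reduction="batchmean",
        ) * temperature ** 2
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```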

Citation types: 0 supporting, 14 mentioning, 0 contrasting

Cited by 6 publications (15 citation statements)
References: 37 publications
“…To improve the performance of FL, existing methods can be classified into two categories, i.e., homogeneous methods and heterogeneous methods. Homogeneous FL methods [5], [14]-[16] still use the same model as the global model for local training. These methods aim to use a well-designed model training mechanism [14], [16], a device selection mechanism [17]-[20], or a data processing mechanism [21]-[23] to improve the inference performance of the global model.…”
Section: Introduction (mentioning)
Confidence: 99%
“…Homogeneous FL methods [5], [14]-[16] still use the same model as the global model for local training. These methods aim to use a well-designed model training mechanism [14], [16], a device selection mechanism [17]-[20], or a data processing mechanism [21]-[23] to improve the inference performance of the global model. Although homogeneous FL methods can alleviate the performance degradation caused by non-IID data, their performance is still limited by low-performance devices.…”
Section: Introduction (mentioning)
Confidence: 99%
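The excerpts above describe homogeneous FL, where every client trains the same architecture as the global model and the server aggregates the results. As a concrete reference point, here is a minimal FedAvg-style round in PyTorch; `client.train` is a hypothetical stand-in for a few steps of local SGD, and this is a generic sketch rather than the mechanism of any particular cited paper.

```python
import copy
import torch

def fedavg_round(global_model, clients, local_steps=1):
    """One communication round of homogeneous FL (FedAvg-style sketch).

    Each client trains a copy of the *same* global architecture, and
    the server averages the resulting parameters element-wise.
    """
    local_states = []
    for client in clients:
        local_model = copy.deepcopy(global_model)     # same model on every client
        client.train(local_model, steps=local_steps)  # hypothetical local-update API
        local_states.append(local_model.state_dict())
    # Server aggregation: unweighted mean of client parameters.
    avg_state = {
        key: torch.stack([s[key].float() for s in local_states]).mean(0)
        for key in local_states[0]
    }
    global_model.load_state_dict(avg_state)
    return global_model
```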
“…The MAP estimators θ̂𝓁 and the matrices Â𝓁 are found analytically or numerically, depending on the form of the likelihood functions in the local centers. The matrix Â is positive definite by definition, so its approximation in (7) is as well (if the approximation is sufficiently accurate) and is thus invertible.…”
Section: Bayesian Federated Inference (mentioning)
Confidence: 99%
“…The formulae in (7) do not say anything about the plausibility of the subsets describing similar subpopulations. However, once we have computed the estimates (θ̂, Â), we should find that θ̂ is compatible with each 'local' estimate θ̂𝓁, given the error bars encoded in the matrices Â and Â𝓁.…”
Section: Bayesian Federated Inference (mentioning)
Confidence: 99%
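Formula (7) is not reproduced in these excerpts. Under the Gaussian (Laplace) approximation the quoted passage suggests, local MAP estimates are commonly combined by precision-weighted averaging; the NumPy sketch below illustrates that generic rule under this assumption, and the paper's actual formula (7) may differ (for instance in how the prior is shared across centers).

```python
import numpy as np

def combine_local_posteriors(theta_locals, A_locals):
    """Precision-weighted combination of local Gaussian approximations.

    theta_locals: local MAP estimates, each of shape (d,)
    A_locals:     local precision matrices, each (d, d) and positive definite

    A sum of positive-definite matrices is positive definite, hence
    invertible, mirroring the invertibility remark in the excerpt.
    """
    A_hat = sum(A_locals)
    weighted = sum(A @ t for A, t in zip(A_locals, theta_locals))
    theta_hat = np.linalg.solve(A_hat, weighted)
    return theta_hat, A_hat

# Toy check with two "centers" in d = 2 dimensions.
rng = np.random.default_rng(0)
thetas = [rng.normal(size=2) for _ in range(2)]
As = []
for _ in range(2):
    M = rng.normal(size=(2, 2))
    As.append(M @ M.T + 2 * np.eye(2))  # positive definite by construction
theta_hat, A_hat = combine_local_posteriors(thetas, As)
```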