2020
DOI: 10.48550/arxiv.2011.03206
Preprint

Resource-Constrained Federated Learning with Heterogeneous Labels and Models

Cited by 3 publications (5 citation statements)
References 0 publications
“…The reason for this is that users' personalized data is usually insufficient. The work in [74] identifies a new challenge in FL design, called label heterogeneity. Concretely, each FL device has its own definitions of data labels and its own learning model, independent of the definitions used by other devices and the central server.…”
Section: Personalized FL (mentioning)
confidence: 99%
“…For example, blood type is labeled as bloodtype on device A but as bltype on device B. This new challenge is solved in [74] by devising an α-weighted update. In particular, the overlapping label information across devices is aggregated at the central server.…”
Section: Personalized FL (mentioning)
confidence: 99%
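The statement above describes the mechanism only loosely, so the following is a minimal sketch of one plausible reading: the server averages classifier weights per label over only the devices that define that label (the "overlapping" information), and each device then blends the aggregate with its old model via α. All names and the per-label averaging rule are illustrative assumptions, not taken from [74].

```python
import numpy as np

def aggregate_overlapping(local_weights, label_sets):
    """Server side: average per-label weights over only those
    devices whose label set actually contains the label."""
    all_labels = set().union(*label_sets)
    global_weights = {}
    for label in all_labels:
        # Only devices that define this label contribute to its aggregate.
        owners = [w[label] for w, labels in zip(local_weights, label_sets)
                  if label in labels]
        global_weights[label] = np.mean(owners, axis=0)
    return global_weights

def alpha_update(local_w, global_w, alpha=0.5):
    """Device side: w_new = alpha * w_global + (1 - alpha) * w_local."""
    return {label: alpha * global_w[label] + (1 - alpha) * w
            for label, w in local_w.items()}
```

Under this reading, α = 0 keeps the purely local model and α = 1 adopts the server aggregate outright; intermediate values trade personalization against global knowledge.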
“…Xie et al [11] presented a new approach called multi-center FL clustering, which constructs multiple global models and assigns each to the nearest local models. Several studies [17], [54], [55] have used an auxiliary public dataset to mitigate the biased class-label distributions of existing clients, thereby improving training performance in FL. Li et al [16] curated an additional public dataset to assist training among the local models and adopted transfer learning with knowledge distillation in the FL network.…”
Section: Related Work (mentioning)
confidence: 99%
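As a rough illustration of the public-dataset distillation idea mentioned above (in the spirit of [16], [17]), the sketch below trains a global model to match the averaged soft predictions of the local models on a shared public dataset. The averaging rule, the temperature, and all names are assumptions for illustration, not the cited papers' exact method.

```python
import torch
import torch.nn.functional as F

def distill_global(global_model, local_models, public_loader,
                   optimizer, temperature=2.0, epochs=1):
    """Train the global model to mimic the averaged soft predictions
    of the local models on a shared public dataset."""
    global_model.train()
    for _ in range(epochs):
        for x, _ in public_loader:  # public-set labels are unused here
            with torch.no_grad():
                # Teacher signal: mean of the local models' logits.
                teacher_logits = torch.stack(
                    [m(x) for m in local_models]).mean(dim=0)
            student_logits = global_model(x)
            loss = F.kl_div(
                F.log_softmax(student_logits / temperature, dim=1),
                F.softmax(teacher_logits / temperature, dim=1),
                reduction="batchmean") * temperature ** 2
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```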
“…These shortcomings imply that distance-based local weight clustering is an approach that is intractable to explain, which inspired us to design an algorithm that clusters through the given local labels. Although several past studies have emphasized that biased labels are a major defect that perturbs functionality under FL settings, they suggested alternatives [17], [54]–[56], [60], [61] such as using a public dataset. To the best of our knowledge, no published articles select local models by evaluating their corresponding labels with statistical metrics, without using an auxiliary dataset, in an FL framework.…”
Section: Related Work (mentioning)
confidence: 99%
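One way to make "clustering through the given local labels with statistical metrics" concrete is sketched below: clients are grouped by the Jensen-Shannon distance between their normalized label histograms. The choice of metric and the hierarchical clustering step are assumptions for illustration, not the authors' algorithm.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon
from scipy.cluster.hierarchy import linkage, fcluster

def cluster_by_labels(label_hists, n_clusters=2):
    """Cluster clients whose label distributions are statistically similar."""
    probs = [h / h.sum() for h in label_hists]
    n = len(probs)
    dist = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            # Jensen-Shannon distance between two label distributions.
            dist[i, j] = dist[j, i] = jensenshannon(probs[i], probs[j])
    # Condensed (upper-triangle) form expected by scipy's linkage.
    condensed = dist[np.triu_indices(n, k=1)]
    Z = linkage(condensed, method="average")
    return fcluster(Z, t=n_clusters, criterion="maxclust")
```

Because the grouping is driven by label statistics rather than raw weight distances, cluster membership can be explained directly in terms of which labels each client holds.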
“…(b) Local Update: In this step, if no new classes are reported, we perform a simple weighted α-update [21], where α governs the contributions of the new and old models across FL iterations (Algorithm 1, Choice 1). If new classes are reported, we train on the new-class data along with the public dataset and send the new model weights to the global user (Choice 2).…”
Section: Proposed Framework (mentioning)
confidence: 99%
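Complementing the earlier α-update sketch, the following shows the two-choice control flow this statement describes. The function and variable names, the per-layer blend, and the dataset concatenation are illustrative assumptions, not the exact procedure of [21].

```python
def local_update(old_w, new_w, new_classes, alpha=0.5,
                 train_fn=None, local_data=None, public_data=None):
    """Choice 1: no new classes -> alpha-weighted blend of models.
    Choice 2: new classes reported -> retrain with the public dataset."""
    if not new_classes:
        # w <- alpha * w_new + (1 - alpha) * w_old, applied per layer.
        return {k: alpha * new_w[k] + (1 - alpha) * old_w[k]
                for k in new_w}
    # Train on the new-class data together with the public dataset
    # (list concatenation is an assumption), then send the resulting
    # weights to the global user.
    return train_fn(new_w, local_data + public_data)
```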