2023
DOI: 10.48550/arxiv.2302.11823
Preprint
FedIL: Federated Incremental Learning from Decentralized Unlabeled Data with Convergence Analysis

Abstract: Most existing federated learning methods assume that clients have fully labeled data to train on, while in reality, it is hard for the clients to get task-specific labels due to users' privacy concerns, high labeling costs, or lack of expertise. This work considers the server with a small labeled dataset and intends to use unlabeled data in multiple clients for semi-supervised learning. We propose a new framework with a generalized model, Federated Incremental Learning (FedIL), to address the problem of how to…

Cited by 2 publications (3 citation statements)
References 20 publications (32 reference statements)
“…Biased data can perpetuate historical prejudices and result in discriminatory outcomes. There are several main types of bias, including selection bias [68][69][70][71], sampling bias [25,72,73], labeling bias [26,[74][75][76][77], temporal bias [78][79][80][81], aggregation bias [82][83][84][85][86], historical bias [52,[87][88][89], measurement bias [4,[90][91][92], confirmation bias, proxy bias, cultural bias, under-representation bias [93][94][95], and homophily bias [96][97][98]. Table 2 shows a comparison of the different types of data biases.…”
Section: Data Bias
confidence: 99%
“…In this fashion, the traditional FL problem is converted into an auto-encoder-based FL problem that can accommodate heterogeneous data modalities in clients. Yang et al. proposed Federated Incremental Learning (FedIL), a novel and general framework that employs a Siamese network for contrastive learning to ensure the acquisition of high-quality pseudo-labels during training [36]. Compared with these works, our current work adopts consistency regularization for semi-supervised learning, enabling the use of a large amount of unlabeled data in clients.…”
Section: Related Work
confidence: 99%
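The consistency-regularization idea mentioned in the quote above can be illustrated with a minimal NumPy sketch. This is a hedged, FixMatch-style example under stated assumptions — the function names, the 0.95 confidence threshold, and the weak/strong augmentation inputs are illustrative choices, not the cited papers' actual implementations: a pseudo-label is taken from the prediction on a weakly augmented view and, when the model is confident enough, enforced on the strongly augmented view.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def consistency_loss(weak_logits, strong_logits, threshold=0.95):
    """FixMatch-style consistency regularization (illustrative sketch).

    Pseudo-labels come from the weakly augmented view; cross-entropy is
    applied to the strongly augmented view, masked so that only samples
    whose max predicted probability exceeds `threshold` contribute.
    """
    probs = softmax(weak_logits)
    confidence = probs.max(axis=-1)
    pseudo_labels = probs.argmax(axis=-1)
    mask = confidence >= threshold            # keep only confident samples
    strong_probs = softmax(strong_logits)
    ce = -np.log(strong_probs[np.arange(len(pseudo_labels)), pseudo_labels] + 1e-12)
    if mask.sum() == 0:
        return 0.0                            # nothing confident this batch
    return float((ce * mask).sum() / mask.sum())
```

In a federated setting, a client would compute this loss locally on its unlabeled data; only the resulting weight update, not the data, is sent to the server.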
“…Likewise, Dong et al. proposed a new model called Global-Local Forgetting Compensation (GLFC) to tackle the federated class-incremental learning problem [38]. Yang et al. applied a KL loss to enforce consistency between the predictions made by clients and the server during client training, while screening the uploaded client weights by cosine similarity with normalization to accelerate the convergence of model training [36]. Castellon et al. proposed to address the domain shift challenge by clustering clients into groups with similar data distributions, effectively creating global models for each cluster of clients [39].…”
Section: Related Work
confidence: 99%
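The two mechanisms attributed to [36] in the quote above — a KL loss for client/server prediction consistency, and cosine-similarity screening of uploaded client weights — can be sketched as follows. This is a hedged NumPy illustration under stated assumptions: the function names and the zero similarity threshold are chosen for demonstration and are not the exact FedIL procedure.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two categorical distributions.
    Used to penalize disagreement between client and server predictions."""
    p = np.clip(p, eps, 1.0)
    q = np.clip(q, eps, 1.0)
    return float(np.sum(p * np.log(p / q)))

def cosine_similarity(a, b, eps=1e-12):
    # Cosine similarity between two flattened weight vectors.
    a, b = a.ravel(), b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + eps))

def screen_client_updates(server_weights, client_weights_list, threshold=0.0):
    """Keep only client weight vectors whose cosine similarity with the
    server weights exceeds `threshold` (an illustrative screening rule)."""
    return [w for w in client_weights_list
            if cosine_similarity(server_weights, w) > threshold]
```

A server-side aggregator could call `screen_client_updates` before averaging, so that updates pointing away from the current global model are discarded rather than slowing convergence.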