2022
DOI: 10.48550/arxiv.2203.11635
Preprint

Feature Distribution Matching for Federated Domain Generalization

Abstract: Federated Learning (FL) facilitates distributed model learning to protect users' privacy. In the absence of labels for a new user's data, the knowledge transfer in FL allows a learned global model to adapt to the new samples quickly. The multi-source domain adaptation in FL aims to improve the model's generality in a target domain by learning domain-invariant features from different clients. In this paper, we propose Federated Knowledge Alignment (FedKA) that aligns features from different clients and those of…
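
The abstract is truncated at this point. As a rough, non-authoritative sketch of what feature distribution matching across clients typically involves, the snippet below computes an RBF-kernel MMD loss between a client's feature batch and a reference feature batch; the kernel choice, tensor names, and shapes are illustrative assumptions, not the paper's exact FedKA formulation.

```python
# Minimal sketch of feature distribution matching (illustrative only,
# not the paper's exact FedKA objective). Assumes PyTorch is available;
# all tensor names and shapes are hypothetical.
import torch


def rbf_kernel(x, y, sigma=1.0):
    # Pairwise RBF (Gaussian) kernel between the rows of x and y.
    dists = torch.cdist(x, y) ** 2
    return torch.exp(-dists / (2 * sigma ** 2))


def mmd_loss(client_feats, reference_feats, sigma=1.0):
    # Squared Maximum Mean Discrepancy between two feature batches;
    # minimizing it pulls the client's feature distribution toward the reference.
    k_cc = rbf_kernel(client_feats, client_feats, sigma).mean()
    k_rr = rbf_kernel(reference_feats, reference_feats, sigma).mean()
    k_cr = rbf_kernel(client_feats, reference_feats, sigma).mean()
    return k_cc + k_rr - 2 * k_cr


# Example: align a client's local encoder features with features produced
# by the global model on a shared/reference batch.
client_feats = torch.randn(32, 128)     # hypothetical local feature batch
reference_feats = torch.randn(32, 128)  # hypothetical global/reference features
loss = mmd_loss(client_feats, reference_feats)
```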

Cited by 1 publication (1 citation statement)
References 8 publications

“…Reisizadeh et al. (2020) assumes the local distribution is perturbed by an affine function, i.e., from x to Ax + b. There are also some methods that aim to learn client-invariant features (Peng et al., 2019; Wang et al., 2022; Sun et al., 2022; Gan et al., 2021). However, these methods are designed to learn a model that can perform well on unseen deployment distributions that differ from (seen) clients' local distributions, which is out of our scope.…”
Section: Related Work
Mentioning confidence: 99%
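
As a small illustration of the affine perturbation mentioned in the excerpt (mapping each sample x to Ax + b), a possible sketch is shown below; the dimensions and noise scales are assumptions chosen for illustration.

```python
# Sketch of an affine perturbation of a client's local data, x -> A x + b,
# as assumed in the cited setting. Shapes and scales are illustrative.
import torch

d = 128                                      # hypothetical feature dimension
x = torch.randn(64, d)                       # a batch of local samples
A = torch.eye(d) + 0.1 * torch.randn(d, d)   # near-identity linear distortion
b = 0.05 * torch.randn(d)                    # small additive shift
x_perturbed = x @ A.T + b                    # each sample mapped to A x + b
```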