2022
DOI: 10.1109/tpds.2021.3136673

FRuDA: Framework for Distributed Adversarial Domain Adaptation

Cited by 2 publications (3 citation statements)
References 22 publications
“…Addressing distribution shifts is a key problem in FL, and most existing works focus on label distribution skew through techniques such as training robust global models (Li et al., 2018c, 2021) or variance-reduction methods (Karimireddy et al., 2020a,b). In another line of research, studies of feature distribution skew in FL mostly focus on domain generalization, training models that can generalize to unseen feature distributions (Peng et al., 2019; Wang et al., 2022a; Shen et al., 2021; Sun et al., 2022; Gan et al., 2021). All of the above methods aim to train a single robust model.…”
Section: Related Work
confidence: 99%
“…Many studies focus on adapting DG algorithms to FL scenarios: for example, combining FL with Distributionally Robust Optimization (DRO) to obtain robust models that perform well on all clients (Mohri et al., 2019; Deng et al., 2021), or combining FL with techniques that learn domain-invariant features (Peng et al., 2019; Wang et al., 2022a; Shen et al., 2021; Sun et al., 2022; Gan et al., 2021) to improve the generalization ability of trained models. All of the above methods aim to train a single robust feature extractor that generalizes well on unseen distributions.…”
Section: A Proof of EM Steps
confidence: 99%
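The DRO idea mentioned in the statement above can be sketched minimally: at each step, take a gradient step on whichever client currently has the worst loss, so the final model hedges against the hardest local distribution. The toy data, learning rate, and step count below are illustrative assumptions, not details from any cited work.

```python
import numpy as np

# Hypothetical toy setup: two "clients" whose local data follow
# different linear-regression slopes (distinct local distributions).
rng = np.random.default_rng(0)
clients = []
for slope in (1.0, 3.0):
    X = rng.normal(size=(50, 1))
    y = slope * X[:, 0] + 0.1 * rng.normal(size=50)
    clients.append((X, y))

def client_loss_grad(w, X, y):
    # Mean squared error and its gradient for one client.
    r = X @ w - y
    return (r @ r) / len(y), 2 * X.T @ r / len(y)

w = np.zeros(1)
for _ in range(200):
    stats = [client_loss_grad(w, X, y) for X, y in clients]
    worst = max(range(len(clients)), key=lambda i: stats[i][0])
    w -= 0.05 * stats[worst][1]  # descend on the worst-off client

final_losses = [client_loss_grad(w, X, y)[0] for X, y in clients]
```

With slopes 1 and 3, the minimax solution sits near w ≈ 2, where both clients incur comparable loss; a single model fit to the pooled average would instead favor whichever distribution dominates.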
“…Reisizadeh et al. (2020) assume the local distribution is perturbed by an affine function, i.e., from x to Ax + b. There are also methods that aim to learn client-invariant features (Peng et al., 2019; Wang et al., 2022; Sun et al., 2022; Gan et al., 2021). However, these methods are designed to learn a model that performs well on unseen deployment distributions that differ from (seen) clients' local distributions, which is out of our scope.…”
Section: Related Work
confidence: 99%
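The affine perturbation described in the statement above can be simulated in a few lines. The particular A and b below are illustrative values, not parameters from the cited work; the point is only that each client observes a linearly distorted, shifted view of the same underlying features.

```python
import numpy as np

# Simulate one client's affine distribution shift: x -> A x + b.
rng = np.random.default_rng(42)
x = rng.normal(size=(100, 2))           # source features, zero-mean
A = np.array([[1.2, 0.0],               # per-client linear distortion
              [0.3, 0.9]])
b = np.array([0.5, -1.0])               # per-client offset

x_shifted = x @ A.T + b                 # the client-local view of x
```

Because the source features are zero-mean, the shifted sample's mean lands near b, while A reshapes its covariance; a model robust to this family of shifts must tolerate both effects.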