2021
DOI: 10.48550/arxiv.2110.00726
Preprint
Domain-Specific Bias Filtering for Single Labeled Domain Generalization

Abstract: Domain generalization (DG) utilizes multiple labeled source datasets to train a generalizable model for unseen target domains. However, due to expensive annotation costs, the requirement of labeling all the source data is hard to meet in real-world applications. In this paper, we investigate a Single Labeled Domain Generalization (SLDG) task with only one source domain being labeled, which is more practical and challenging than the Conventional Domain Generalization (CDG). A major obstacle in the SLDG task …


Cited by 2 publications (3 citation statements)
References 51 publications
“…Unsupervised domain adaptation (UDA) [3,4,5,12,13,14,15,16,17,18,19,20,21,22,23,24] aims to adapt the model trained on a labeled source domain to an unlabeled target domain when there is distinct domain divergence. A series of UDA algorithms [25,26,27,28,29,30,31] have been proposed by employing an adversarial learning strategy where the semantic features of the source and target data are aligned for reducing domain divergence.…”
Section: Unsupervised Domain Adaptation (mentioning)
confidence: 99%
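
The adversarial alignment strategy described in this citation statement can be illustrated with a minimal DANN-style sketch. This is an assumption for illustration only, not the exact method of any cited work; the layer sizes, class count, learning rate, and the GradReverse helper are hypothetical.

```python
# Minimal adversarial feature-alignment sketch (illustrative assumptions only):
# a shared feature extractor is trained so a domain discriminator cannot tell
# source features from target features, while a classifier is trained on the
# labeled source data. The gradient reversal layer flips gradients flowing from
# the domain discriminator back into the feature extractor.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse (and scale) the gradient for the feature extractor.
        return -ctx.lambd * grad_output, None

feature_extractor = nn.Sequential(nn.Linear(256, 128), nn.ReLU())        # assumed dims
label_classifier = nn.Linear(128, 10)                                     # 10 classes assumed
domain_discriminator = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 2))

params = (list(feature_extractor.parameters())
          + list(label_classifier.parameters())
          + list(domain_discriminator.parameters()))
optimizer = torch.optim.SGD(params, lr=1e-2)
ce = nn.CrossEntropyLoss()

def train_step(x_src, y_src, x_tgt, lambd=1.0):
    """One alignment step on a labeled source batch and an unlabeled target batch."""
    f_src = feature_extractor(x_src)
    f_tgt = feature_extractor(x_tgt)

    # Supervised classification loss on labeled source data only.
    cls_loss = ce(label_classifier(f_src), y_src)

    # Domain classification loss; gradient reversal pushes the extractor
    # toward features the discriminator cannot separate.
    feats = torch.cat([f_src, f_tgt], dim=0)
    dom_labels = torch.cat([torch.zeros(len(x_src), dtype=torch.long),
                            torch.ones(len(x_tgt), dtype=torch.long)])
    dom_loss = ce(domain_discriminator(GradReverse.apply(feats, lambd)), dom_labels)

    loss = cls_loss + dom_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return cls_loss.item(), dom_loss.item()

# Example usage with random tensors (shapes are assumptions):
# train_step(torch.randn(32, 256), torch.randint(0, 10, (32,)), torch.randn(32, 256))
```

The design point is that a single backward pass trains the discriminator to separate domains while simultaneously steering the feature extractor toward domain-invariant representations, which is the general mechanism behind the adversarial UDA line of work cited above.
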
“…Semi-Supervised Domain Generalization (SSDG) [36,47,59,65,70] aims to reduce the reliance of DG on annotation via pseudolabeling [59], consistency learning [70], or bias filtering [65]. For example, StyleMatch [70] combines consistency learning, model uncertainty learning, and style augmentation to utilize the annotation for improving model robustness.…”
Section: Related Work (mentioning)
confidence: 99%
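
The pseudo-labeling component mentioned for SSDG can be sketched as follows. This is a minimal illustration under assumed settings; the 0.95 confidence threshold and the model interface are hypothetical, not StyleMatch's or the cited papers' actual recipes.

```python
# Minimal confidence-thresholded pseudo-labeling sketch (illustrative only):
# predictions on unlabeled data above a confidence threshold are treated as
# labels and added to the supervised loss.
import torch
import torch.nn.functional as F

def pseudo_label_loss(model, x_unlabeled, threshold=0.95):
    """Cross-entropy on high-confidence pseudo-labels; the threshold is an assumed value."""
    with torch.no_grad():
        probs = F.softmax(model(x_unlabeled), dim=1)
        conf, pseudo_y = probs.max(dim=1)
        mask = conf >= threshold            # keep only confident predictions

    if mask.sum() == 0:
        return torch.tensor(0.0)            # nothing confident enough in this batch

    logits = model(x_unlabeled[mask])       # re-forward with gradients enabled
    return F.cross_entropy(logits, pseudo_y[mask])
```

Consistency-learning variants follow the same pattern but compare predictions on weakly and strongly augmented views of the same unlabeled sample instead of (or in addition to) thresholded hard labels.
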
“…assumption, hence may not be favorably extended to the generalization scenarios under distinct domain shifts. Semi-supervised domain generalization (SSDG) [36,47,59,65,70] tackles domain shift under the SSL setting. But some data directly assumed to be labeled in this task might not be helpful for generalization improvement but could increase the annotation costs.…”
Section: Introduction (mentioning)
confidence: 99%