2021
DOI: 10.48550/arXiv.2101.12727
Preprint
Surprisingly Simple Semi-Supervised Domain Adaptation with Pretraining and Consistency

Abstract: Visual domain adaptation involves learning to classify images from a target visual domain using labels available in a different source domain. A range of prior work uses adversarial domain alignment to try to learn a domain-invariant feature space in which a good source classifier can perform well on target data. This, however, can lead to errors where class A features in the target domain are aligned to class B features in the source. We show that in the presence of a few target labels, simple techniques like self-s…
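The title pairs pretraining with consistency; as a rough illustration of the latter, here is a minimal consistency-regularization sketch in PyTorch. The abstract is truncated, so the FixMatch-style formulation below is an assumption about the flavor of consistency used, and `model`, `weak_aug`, and `strong_aug` are hypothetical placeholders, not names from the paper.

```python
import torch
import torch.nn.functional as F

def consistency_loss(model, x_unlabeled, weak_aug, strong_aug, threshold=0.95):
    """Penalize disagreement between predictions on two augmented views
    of the same unlabeled target images (FixMatch-style assumption)."""
    with torch.no_grad():
        # Pseudo-label each image from its weakly augmented view.
        probs = F.softmax(model(weak_aug(x_unlabeled)), dim=1)
        conf, pseudo = probs.max(dim=1)
        # Keep only confident pseudo-labels.
        mask = (conf >= threshold).float()
    # Push the strongly augmented view toward those pseudo-labels.
    logits = model(strong_aug(x_unlabeled))
    loss = F.cross_entropy(logits, pseudo, reduction="none")
    return (loss * mask).mean()
```

This term would be added to the supervised loss on the few labeled target examples; the confidence threshold guards against reinforcing wrong pseudo-labels.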

Cited by 1 publication (1 citation statement) | References 27 publications
“…Self-supervised methods learn useful representations by training on unlabeled data via auxiliary proxy tasks. Common approaches include reconstruction tasks (Vincent et al., 2008; Erhan et al., 2010; Devlin et al., 2019; Gidaris et al., 2018; Lewis et al., 2020) and contrastive learning (He et al., 2020; Chen et al., 2020b; Caron et al., 2020; Radford et al., 2021b); recent work has shown that self-supervised methods can reduce dependence on spurious correlations and improve performance on domain adaptation tasks (Wang et al., 2021; Tsai et al., 2021; Mishra et al., 2021). We use these self-supervision methods for unsupervised adaptation by first pre-training models on the unlabeled data, and then finetuning them on the labeled source data (Shen et al., 2021).…”
Section: Algorithms (mentioning)
Confidence: 99%
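As a rough illustration of the two-stage recipe the statement describes (self-supervised pretraining on unlabeled data, then finetuning on labeled source data), here is a minimal PyTorch sketch using rotation prediction (Gidaris et al., 2018) as the proxy task. The function names and training-step structure are illustrative assumptions, not code from the cited papers.

```python
import torch
import torch.nn.functional as F

def rotate_batch(x):
    """Return a batch of NCHW images rotated by 0/90/180/270 degrees,
    plus the rotation index each image should be classified as."""
    rotations = [torch.rot90(x, k, dims=(2, 3)) for k in range(4)]
    labels = torch.arange(4, device=x.device).repeat_interleave(x.size(0))
    return torch.cat(rotations, dim=0), labels

def pretrain_step(backbone, rot_head, optimizer, x_unlabeled):
    """Stage 1: self-supervised pretraining -- predict the rotation
    applied to each unlabeled image."""
    x_rot, y_rot = rotate_batch(x_unlabeled)
    loss = F.cross_entropy(rot_head(backbone(x_rot)), y_rot)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

def finetune_step(backbone, cls_head, optimizer, x_src, y_src):
    """Stage 2: supervised finetuning of the same backbone on
    labeled source data."""
    loss = F.cross_entropy(cls_head(backbone(x_src)), y_src)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The key design choice is that the backbone weights learned in stage 1 are reused in stage 2, so the classifier starts from representations shaped by the unlabeled (target-inclusive) data rather than from scratch.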