2020
DOI: 10.48550/arxiv.2002.07953
Preprint
Universal Domain Adaptation through Self Supervision

Kuniaki Saito, Donghyun Kim, Stan Sclaroff, et al.
Cited by 25 publications (52 citation statements)
References 0 publications
“…Research interest in unsupervised learning has surged recently [43]. Early efforts were dedicated to designing pretext tasks [17,19,30,70], which prove beneficial for UDA when used as auxiliary tasks on target data [48,54,67]. The gap with supervised learning has been considerably closed by a few prominent works [10,24] that build on contrastive learning.…”
Section: Unsupervised Representation Learning (mentioning)
confidence: 99%
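The contrastive objective this excerpt refers to can be made concrete with a short sketch. Below is a minimal, illustrative InfoNCE-style loss of the kind popularized by SimCLR/MoCo-type methods; the function name, batch size, and embedding dimension are assumptions for illustration, not taken from the cited papers.

```python
# Hypothetical sketch of an InfoNCE-style contrastive loss.
# z1 and z2 are embeddings of two augmented views of the same N images;
# the positive pair for each row is the matching row in the other view.
import torch
import torch.nn.functional as F

def info_nce_loss(z1, z2, temperature=0.1):
    """z1, z2: (N, D) embeddings. Returns a scalar contrastive loss."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature       # (N, N) cosine similarities
    targets = torch.arange(z1.size(0))       # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

# Usage with random stand-in embeddings (an encoder would produce these):
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
loss = info_nce_loss(z1, z2)
```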
“…In domain adaptation, some recent approaches [5,36,41] have used self-supervision tasks as an auxiliary objective to regularize their model. Saito et al. [30] used a self-supervised feature-space clustering objective for universal domain adaptation. PAC differs from these approaches in that we use rotation prediction to pretrain our feature extractor.…”
Section: Background and Related Work (mentioning)
confidence: 99%
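For context, rotation prediction as a pretext task reduces to four-way classification over image rotations. The following is a minimal sketch under that assumption; `encoder` and `head` are hypothetical placeholders for a feature extractor and a classifier, not components named by the cited work.

```python
# Hypothetical sketch of the rotation-prediction pretext task: the network
# predicts which of four rotations (0/90/180/270 degrees) was applied.
import torch
import torch.nn.functional as F

def rotation_batch(images):
    """images: (N, C, H, W). Returns rotated copies and rotation labels 0..3."""
    rotated = [torch.rot90(images, k=k, dims=(2, 3)) for k in range(4)]
    labels = torch.arange(4).repeat_interleave(images.size(0))
    return torch.cat(rotated), labels

def rotation_loss(encoder, head, images):
    x, y = rotation_batch(images)
    return F.cross_entropy(head(encoder(x)), y)  # 4-way rotation classification

# Usage sketch with toy stand-in modules:
enc = torch.nn.Flatten()                # stand-in feature extractor
head = torch.nn.Linear(3 * 32 * 32, 4)  # 4-way rotation classifier
loss = rotation_loss(enc, head, torch.randn(8, 3, 32, 32))
```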
“…Prior domain adaptation methods first extract discriminative features on the source domain guided by source supervision. They then align the target features with the source features by minimizing maximum mean discrepancy [19], minimizing the maximum discrepancy of domain distributions [33,42], adversarial learning with a feature-level or pixel-level domain classifier [7,14,18,36], entropy optimization [32,18,31], or finding matching pairs across domains based on optimal transport [1,4,34,38] or nearest neighbors [24,10]. Semi-supervised learning techniques such as entropy minimization [9], pseudo-labeling [16], and Virtual Adversarial Training (VAT) [21] have also often been used in domain adaptation (e.g., [17,31,44]).…”
Section: Related Work (mentioning)
confidence: 99%
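Of the alignment criteria listed above, maximum mean discrepancy (MMD) is the simplest to write down. Below is a hedged sketch of a biased MMD² estimate with a Gaussian kernel; the kernel choice and bandwidth are illustrative assumptions, not the setting used in [19].

```python
# Hypothetical sketch of an MMD^2 alignment term between source and target
# feature batches, using a Gaussian kernel with a fixed bandwidth sigma.
import torch

def mmd2(source, target, sigma=1.0):
    """source: (Ns, D), target: (Nt, D). Biased MMD^2 estimate."""
    def gram(a, b):
        d2 = torch.cdist(a, b).pow(2)           # pairwise squared distances
        return torch.exp(-d2 / (2 * sigma ** 2))
    return (gram(source, source).mean()
            + gram(target, target).mean()
            - 2 * gram(source, target).mean())

# Usage with random stand-in features:
loss = mmd2(torch.randn(16, 64), torch.randn(16, 64))
```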
“…Instance Discrimination [39] learns an embedding that maps visually similar images close to each other and far from dissimilar images by classifying each image as its own unique class. Other methods propose to cluster local neighborhoods [3,15,32,43] within the same domain.…”
Section: Related Work (mentioning)
confidence: 99%
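A minimal sketch of the instance-discrimination idea follows, assuming a memory bank holding one embedding per training image; the temperature, shapes, and names are illustrative and not taken from [39].

```python
# Hypothetical sketch of Instance Discrimination: each image is treated as
# its own class, scored against a memory bank of per-instance embeddings.
import torch
import torch.nn.functional as F

def instance_discrimination_loss(features, indices, memory_bank, temperature=0.07):
    """features: (N, D) batch embeddings; indices: (N,) dataset indices;
    memory_bank: (num_instances, D), one stored embedding per image."""
    features = F.normalize(features, dim=1)
    bank = F.normalize(memory_bank, dim=1)
    logits = features @ bank.t() / temperature  # similarity to every instance
    return F.cross_entropy(logits, indices)     # the "correct class" is itself

# Usage with random stand-in values:
bank = torch.randn(100, 64)            # 100 images, 64-dim embeddings
feats = torch.randn(8, 64)
idx = torch.randint(0, 100, (8,))
loss = instance_discrimination_loss(feats, idx, bank)
```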