2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
DOI: 10.1109/cvprw53098.2021.00250
Contrastive Domain Adaptation

Abstract: Recently, contrastive self-supervised learning has become a key component for learning visual representations across many computer vision tasks and benchmarks. However, contrastive learning in the context of domain adaptation remains largely underexplored. In this paper, we propose to extend contrastive learning to a new domain adaptation setting, one in which similarity is learned and deployed on samples following different probability distributions, without access to labels. Cont…

Cited by 50 publications (29 citation statements)
References 25 publications
“…Prior works have explored the application of self-supervision to UDA. However, to the best of our knowledge they have all used domain distance either as part of their objective (Kang et al., 2019; Wang et al., 2021; Thota & Leontidis, 2021) or for model selection (Sun et al., 2019).…”
Section: Related Work
Citation type: mentioning
Confidence: 99%
“…Representative techniques include latent distribution alignment between the source and target domains (Tzeng et al. 2017; Hoffman et al. 2018; Long et al. 2017). Contrastive learning is used to extract discriminative features between classes (Kang et al. 2019; Thota and Leontidis 2021), and the memory module is used to augment target features using incremental information (Asghar et al. 2020; Zheng and Yang 2019; Liu et al. 2020). A long-standing problem in domain adaptation is negative transfer, which refers to the abnormal scenarios when the source domain data causes reduced learning performance in the target domain due to a large discrepancy in data distributions (Wang et al. 2019; Zhang et al. 2020).…”
Section: Domain Adaptation
Citation type: mentioning
Confidence: 99%
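
The latent-distribution-alignment idea referenced in the statement above is commonly implemented as a statistical distance penalty between source and target feature batches. Below is a minimal PyTorch sketch of one such penalty, a Gaussian-kernel maximum mean discrepancy (MMD); the function name, the bandwidth sigma, and the weighting in the usage comment are illustrative assumptions, not details taken from the cited papers.

```python
import torch

def gaussian_mmd(source_feats, target_feats, sigma=1.0):
    """Biased MMD^2 estimate between two feature batches (Gaussian kernel).

    source_feats: (N, D) encoder outputs for source-domain samples.
    target_feats: (M, D) encoder outputs for target-domain samples.
    sigma: kernel bandwidth (illustrative default, normally tuned).
    """
    def kernel(a, b):
        # Pairwise squared Euclidean distances mapped through a Gaussian kernel.
        d2 = torch.cdist(a, b) ** 2
        return torch.exp(-d2 / (2 * sigma ** 2))

    k_ss = kernel(source_feats, source_feats).mean()
    k_tt = kernel(target_feats, target_feats).mean()
    k_st = kernel(source_feats, target_feats).mean()
    return k_ss + k_tt - 2 * k_st

# Hypothetical usage: add the penalty to the supervised source loss so the
# encoder maps both domains onto similar feature statistics.
# loss = source_task_loss + mmd_weight * gaussian_mmd(f_src, f_tgt)
```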
“…According to (Thota & Leontidis, 2021), CL has become a key approach for unsupervised learning tasks with unlabeled datasets (Chen et al., 2020; K. Jaiswal et al., 2020). Broadly, one sample from the unlabeled dataset is taken as a so-called anchor and a strongly augmented version of this sample is considered a positive sample.…”
Section: Contrastive Learning
Citation type: mentioning
Confidence: 99%
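
To make the anchor/positive terminology in the statement above concrete, here is a minimal InfoNCE-style contrastive loss: a sketch assuming two embedding batches z1 and z2, where row i of z2 is the strongly augmented view of the anchor in row i of z1 and every other row in the batch serves as a negative. The function name and temperature are illustrative choices, not taken from the cited works.

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    """InfoNCE loss: row i of z1 (anchor) should match row i of z2 (positive).

    z1, z2: (N, D) embeddings of two augmented views of the same N samples.
    All non-matching rows in the batch act as negatives.
    """
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature               # (N, N) cosine similarities
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)          # diagonal = positive pairs
```

Normalizing the embeddings first makes the logits cosine similarities, so the temperature alone controls how sharply the softmax concentrates on the positive pair.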
“…(Jaiswal et al., 2020; Shen et al., 2022) Here, train and test samples are from the same dataset and distribution. A major problem of the contrastive self-supervised architecture is the occurrence of false negatives among the negative samples (Thota & Leontidis, 2021).…”
Section: Contrastive Learning
Citation type: mentioning
Confidence: 99%
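
Because the dataset is unlabeled, some in-batch negatives inevitably share the anchor's (unknown) class; these are the false negatives the statement above refers to. One common mitigation, sketched below under our own assumptions rather than as the cited paper's method, is to drop the k most similar non-positive pairs from the contrastive denominator on the grounds that they are likely false negatives.

```python
import torch
import torch.nn.functional as F

def info_nce_fn_masked(z1, z2, temperature=0.1, k=2):
    """InfoNCE variant that excludes suspected false negatives.

    For each anchor, the k highest-similarity off-diagonal pairs are treated
    as likely false negatives and removed from the softmax denominator.
    Requires k < N - 1.
    """
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    sim = z1 @ z2.t() / temperature
    n = sim.size(0)
    eye = torch.eye(n, dtype=torch.bool, device=sim.device)
    # Rank only the off-diagonal (negative) similarities.
    neg_sim = sim.masked_fill(eye, float('-inf'))
    _, idx = neg_sim.topk(k, dim=1)
    mask = torch.zeros_like(sim, dtype=torch.bool)
    mask[torch.arange(n, device=sim.device).unsqueeze(1), idx] = True
    # Suspected false negatives get -inf logits, i.e. zero softmax weight.
    logits = sim.masked_fill(mask, float('-inf'))
    targets = torch.arange(n, device=sim.device)
    return F.cross_entropy(logits, targets)
```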