2021
DOI: 10.48550/arxiv.2103.15566
Preprint

Contrastive Domain Adaptation

Abstract: Recently, contrastive self-supervised learning has become a key component for learning visual representations across many computer vision tasks and benchmarks. However, contrastive learning in the context of domain adaptation remains largely underexplored. In this paper, we propose to extend contrastive learning to a new domain adaptation setting, in which the similarity is learned and deployed on samples that follow different probability distributions, without access to labels. Cont…
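The truncated abstract builds on the standard contrastive objective (InfoNCE/NT-Xent, as popularized by SimCLR). Below is a minimal sketch of that loss for orientation only; the function name, the two-view batch construction, and the temperature value are illustrative assumptions, not the paper's exact formulation.

    # Minimal NT-Xent (normalized temperature-scaled cross-entropy) sketch.
    # All names and the SimCLR-style batch construction are illustrative,
    # not the exact formulation of the cited paper.
    import torch
    import torch.nn.functional as F

    def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
        """z1, z2: (N, D) embeddings of two augmented views of the same N images."""
        n = z1.size(0)
        z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2N, D), unit norm
        sim = z @ z.t() / temperature                       # (2N, 2N) cosine similarities
        sim.fill_diagonal_(float("-inf"))                   # exclude self-similarity
        # positives: view i pairs with view i+N (and vice versa)
        targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
        return F.cross_entropy(sim, targets)

    # Toy usage: random embeddings standing in for two augmented views.
    z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
    print(nt_xent_loss(z1, z2).item())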

Cited by 4 publications (4 citation statements)
References 37 publications (61 reference statements)
“…However, contrastive learning in the context of DA remains underexplored. In Thota and Leontidis (2021), the authors propose to extend the contrastive learning approach to a situation where none of the data domains contain any labeled data. In our work, we combine ideas of contrastive learning on unlabeled samples and supervised learning using the labeled source domain data to build our UDA method for astronomy.…”
Section: Methods
confidence: 99%
“…However, contrastive learning in the context of domain adaptation remains underexplored. In Thota and Leontidis (2021), the authors propose to extend the contrastive learning approach to a situation where none of the data domains contain any labeled data. In our work, we combine ideas of contrastive learning on unlabeled samples and supervised learning using the labeled source domain data to build our UDA method for astronomy.…”
Section: Methods
confidence: 99%
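The recipe described in the citation statements above, a supervised loss on the labeled source domain plus a contrastive loss on unlabeled samples, can be written as one joint objective. The following is a hedged sketch of that general pattern, reusing nt_xent_loss from the sketch above; the stand-in backbone, the module split, and the weight lambda_c are assumptions for illustration, not the cited method's actual architecture.

    # Hedged sketch of the joint UDA objective: supervised cross-entropy on
    # labeled source images plus a contrastive loss on unlabeled images from
    # both domains. Module split and lambda_c are illustrative assumptions.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128))  # stand-in backbone
    classifier = nn.Linear(128, 10)                                     # source-label head
    proj = nn.Linear(128, 64)                                           # contrastive projection head

    def uda_step(x_src, y_src, x_unl_v1, x_unl_v2, lambda_c: float = 1.0):
        """x_src/y_src: labeled source batch; x_unl_v1/v2: two augmented views
        of an unlabeled batch drawn from source and target domains."""
        sup = F.cross_entropy(classifier(encoder(x_src)), y_src)
        z1, z2 = proj(encoder(x_unl_v1)), proj(encoder(x_unl_v2))
        con = nt_xent_loss(z1, z2)  # reuses the NT-Xent sketch above
        return sup + lambda_c * con

    # Toy usage with random tensors standing in for image batches.
    x_src, y_src = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
    x_v1, x_v2 = torch.randn(8, 3, 32, 32), torch.randn(8, 3, 32, 32)
    print(uda_step(x_src, y_src, x_v1, x_v2).item())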
“…To address the sparse reward problem, several expert trajectories are incorporated into the replay buffer. However, CURL, and more generally, self-contrastive representation learning methods, exhibit limited performance in domain adaptation, particularly in bridging the Sim2Real gap [27]. To overcome these challenges, we combine CURL with segmentation-driven domain shifts to enhance its Sim2Real transferability.…”
Section: B. RL-Based Visual Control
confidence: 99%