2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition 2018
DOI: 10.1109/cvpr.2018.00835

Unsupervised Domain Adaptation with Similarity Learning

Abstract: The objective of unsupervised domain adaptation is to leverage features from a labeled source domain and learn a classifier for an unlabeled target domain, with a similar but different data distribution. Most deep learning approaches to domain adaptation consist of two steps: (i) learn features that preserve a low risk on labeled samples (source domain) and (ii) make the features from both domains as indistinguishable as possible, so that a classifier trained on the source can also be applied on the target domain. […]
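
The classification scheme sketched in the abstract can be pictured as scoring each image embedding against one learned prototype per category. Below is a minimal, hypothetical sketch of that idea (the tensor shapes, cosine similarity, and variable names are assumptions for illustration; the paper learns its own pairwise similarity function jointly with the domain-invariant features):

```python
# Minimal sketch of similarity-based classification against category prototypes.
# Shapes, names, and the cosine similarity are illustrative assumptions.
import torch
import torch.nn.functional as F

def prototype_logits(features, prototypes):
    """Score each sample against per-category prototype embeddings.

    features:   (batch, dim)     image embeddings
    prototypes: (classes, dim)   one learned prototype per category
    returns:    (batch, classes) similarity scores used as logits
    """
    features = F.normalize(features, dim=1)
    prototypes = F.normalize(prototypes, dim=1)
    return features @ prototypes.t()

# Toy example: 8 samples, 64-dim features, 10 categories.
feats = torch.randn(8, 64)
protos = torch.randn(10, 64, requires_grad=True)   # learned jointly with the encoder
logits = prototype_logits(feats, protos)
labels = torch.randint(0, 10, (8,))
loss = F.cross_entropy(logits, labels)             # supervised loss on labeled source data
loss.backward()
```

In this picture, an unlabeled target image is assigned the category whose prototype it is most similar to, which is why the prototypes and the domain-invariant features must be learned jointly.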

Cited by 240 publications (133 citation statements) · References 36 publications

“…Under this motivation, multiple methods have been used to align the distributions of the two domains, such as maximum mean discrepancy (MMD) [6,7,18], CORrelation ALignment (CORAL) [8,21], attention [22], and optimal transport [23]. In addition, adversarial learning is also used to learn domain-invariant features [9,10,20,24]. On par with these methods, which align distributions in the feature space, some methods align distributions in raw pixel space by translating source data to the target domain with image-to-image translation techniques [25–30].…”
Section: Related Work (mentioning)
confidence: 99%
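As an illustration of the first family of alignment penalties listed in the excerpt above, the sketch below computes a single-kernel maximum mean discrepancy between source and target feature batches. The Gaussian kernel, fixed bandwidth, and batch shapes are assumptions; the cited works typically use multi-kernel or deep variants.

```python
# Illustrative single-kernel MMD penalty between source and target features.
# Kernel choice and bandwidth are assumptions, not the cited methods' settings.
import torch

def gaussian_kernel(x, y, sigma=1.0):
    # Pairwise Gaussian kernel values between rows of x and rows of y.
    d2 = torch.cdist(x, y) ** 2
    return torch.exp(-d2 / (2 * sigma ** 2))

def mmd_loss(source, target, sigma=1.0):
    """Squared maximum mean discrepancy between two feature batches."""
    k_ss = gaussian_kernel(source, source, sigma).mean()
    k_tt = gaussian_kernel(target, target, sigma).mean()
    k_st = gaussian_kernel(source, target, sigma).mean()
    return k_ss + k_tt - 2 * k_st

# Toy usage: penalize the mismatch between source and target feature batches.
src_feats = torch.randn(32, 128)
tgt_feats = torch.randn(32, 128)
print(mmd_loss(src_feats, tgt_feats).item())
```

Minimizing this quantity alongside the source classification loss is what pushes the two feature distributions toward each other; CORAL plays the same role but matches second-order statistics instead of kernel means.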
“…Furthermore, the second term is also expected to be small when domain-invariant features are optimized between S and T. The third term is treated as negligibly small and is usually disregarded by previous methods [7,9,20]. However, a large C may hurt performance on the target domain [43].…”
Section: Theoretical Analysis (mentioning)
confidence: 99%
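
For context, the three terms this excerpt refers to correspond to the usual form of the domain adaptation bound of Ben-David et al.; the notation in the cited analysis may differ, and C below stands for the joint-error term that the excerpt calls the third term.

```latex
% Standard adaptation bound (sketch); notation may differ from the cited analysis.
\[
  \epsilon_T(h) \;\le\;
  \underbrace{\epsilon_S(h)}_{\text{source risk}}
  \;+\;
  \underbrace{\tfrac{1}{2}\, d_{\mathcal{H}\Delta\mathcal{H}}(\mathcal{D}_S, \mathcal{D}_T)}_{\text{domain divergence}}
  \;+\;
  \underbrace{C}_{\text{joint error}},
  \qquad
  C \;=\; \min_{h' \in \mathcal{H}} \big[\, \epsilon_S(h') + \epsilon_T(h') \,\big].
\]
```

The first term is controlled by training on labeled source data, the second by learning domain-invariant features; the excerpt's caveat is that C, although usually ignored, is not guaranteed to stay small.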
“…In the context of drug response prediction, Mourragui et al. (2019) proposed PRECISE, a subspace-centric method based on principal component analysis that minimizes the discrepancy in the input space between cell lines and patients. Recently, adversarial domain adaptation has shown strong performance in addressing the discrepancy in the input space for different applications, and its performance is comparable to that of metric-based and subspace-centric methods in computer vision (Hosseini-Asl et al., 2018; Pinheiro, 2018; Zou et al., 2018; Tsai et al., 2018; Long et al., 2018; Chen et al., 2017; Tzeng et al., 2017; Ganin and Lempitsky, 2014). However, adversarial adaptation that addresses the discrepancies in both the input and output spaces has not yet been explored, either for pharmacogenomics or for other applications.…”
Section: Introduction (mentioning)
confidence: 99%
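
As a rough illustration of the adversarial adaptation idea this excerpt refers to, the sketch below aligns source and target features with a gradient-reversal layer and a domain discriminator, in the spirit of Ganin and Lempitsky (2014). Layer sizes, the fixed reversal weight, and the single linear discriminator are assumptions, not the cited methods' architectures.

```python
# DANN-style sketch: a domain discriminator trained through a gradient-reversal
# layer pushes the encoder toward domain-invariant features. All sizes are assumptions.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse (and scale) the gradient flowing back into the encoder.
        return -ctx.lam * grad_output, None

encoder = nn.Sequential(nn.Linear(128, 64), nn.ReLU())
domain_clf = nn.Linear(64, 1)                 # predicts source (0) vs. target (1)
bce = nn.BCEWithLogitsLoss()

src, tgt = torch.randn(32, 128), torch.randn(32, 128)
feats = encoder(torch.cat([src, tgt]))
domain_labels = torch.cat([torch.zeros(32, 1), torch.ones(32, 1)])

# The discriminator learns to tell the domains apart, while the reversed
# gradient trains the encoder to make them indistinguishable.
domain_logits = domain_clf(GradReverse.apply(feats, 1.0))
adv_loss = bce(domain_logits, domain_labels)
adv_loss.backward()
```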