2018
DOI: 10.1007/978-3-030-01225-0_28
DeepJDOT: Deep Joint Distribution Optimal Transport for Unsupervised Domain Adaptation

Abstract: In computer vision, one is often confronted with problems of domain shifts, which occur when one applies a classifier trained on a source dataset to target data sharing similar characteristics (e.g. same classes), but also different latent data structures (e.g. different acquisition conditions). In such a situation, the model will perform poorly on the new data, since the classifier is specialized to recognize visual cues specific to the source domain. In this work we explore a solution, named DeepJDOT, to tac…

Cited by 251 publications (172 citation statements). References 34 publications (63 reference statements).
“…We aim at leveraging the available information {X_s, Y_s, X_t} to learn a classifier f, that is, a labeling function f̂ which approximates f_s and is closer to f_t than any other function f̂_s. In order to solve this unsupervised domain adaptation problem, Optimal Transport theory can be employed (Damodaran et al., 2018).…”
Section: Problem Statement (mentioning)
confidence: 99%
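As a rough sketch of how optimal transport couples the two domains in this setting, the joint objective can be written (our reconstruction of the DeepJDOT-style formulation, not a formula quoted from the citing paper) as

\min_{\gamma \in \Pi(\mu_s, \mu_t),\, f,\, g} \sum_{i,j} \gamma_{ij} \Big( \alpha \,\lVert g(x_i^s) - g(x_j^t) \rVert^2 + \lambda \, \mathcal{L}\big(y_i^s, f(g(x_j^t))\big) \Big),

where g is the feature embedding, f the classifier, \gamma a coupling between the empirical source and target distributions, and \alpha, \lambda trade off feature alignment against label transport.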
“…Subsequently, Damodaran et al. proposed a deep learning strategy to address these two drawbacks (Damodaran et al., 2018). Their DeepJDOT framework (i) minimizes the cost c in a deep layer of a Convolutional Neural Network, which is more informative than the original image space, and (ii) solves the problem with a stochastic approximation via mini-batches from the source and target domains.…”
Section: Optimal Transport for Unsupervised Domain Adaptation (mentioning)
confidence: 99%
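A minimal sketch of what this mini-batch strategy can look like in practice is given below; it is a plain NumPy illustration under our own assumptions (the function names, hyperparameter values, and the use of an entropy-regularized Sinkhorn solver are illustrative and not details taken from the cited works):

import numpy as np

def sinkhorn(a, b, M, reg=0.1, n_iter=200):
    """Entropy-regularized OT coupling between histograms a and b for cost matrix M."""
    K = np.exp(-M / reg)
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]

def deepjdot_batch_loss(feat_s, feat_t, onehot_s, prob_t, alpha=0.001, lam=1.0):
    """Mini-batch joint cost: alpha * ||g(x_s) - g(x_t)||^2 + lam * CE(y_s, f(g(x_t)))."""
    # Pairwise squared distances between source and target embeddings.
    d_feat = ((feat_s[:, None, :] - feat_t[None, :, :]) ** 2).sum(-1)
    # Pairwise cross-entropy between source labels and target predictions.
    d_label = -(onehot_s @ np.log(prob_t + 1e-9).T)
    M = alpha * d_feat + lam * d_label
    ns, nt = feat_s.shape[0], feat_t.shape[0]
    gamma = sinkhorn(np.full(ns, 1.0 / ns), np.full(nt, 1.0 / nt), M)
    return (gamma * M).sum()  # transport cost to be minimized w.r.t. the network

In the actual training procedure the coupling and the network parameters are updated in alternation: the coupling is solved with the network fixed, and the resulting transport cost is then back-propagated through the embeddings and predictions.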
“…MMD measures the distance between distributions simply as the distance between the means of the embedded features. In contrast, more recent techniques that use a shared deep encoder employ the Wasserstein metric [42] to address UDA [4], [6]. The Wasserstein metric has been shown to be a more accurate probability metric and can be minimized effectively by first-order deep-learning optimization techniques.…”
Section: Background and Related Work (mentioning)
confidence: 99%
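For concreteness, the "distance between the means of the embedded features" view of MMD mentioned above corresponds, in its simplest linear-kernel form, to the following sketch (the variable names and random "embeddings" are illustrative, not taken from the cited works):

import numpy as np

def linear_mmd(feat_s, feat_t):
    """Squared MMD with a linear kernel: distance between mean embeddings."""
    diff = feat_s.mean(axis=0) - feat_t.mean(axis=0)
    return float(diff @ diff)

# Illustrative usage on random embeddings of a source and a shifted target domain.
rng = np.random.default_rng(0)
print(linear_mmd(rng.normal(0.0, 1.0, (64, 128)),
                 rng.normal(0.5, 1.0, (64, 128))))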