2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr42600.2020.01395

Enhanced Transport Distance for Unsupervised Domain Adaptation

Cited by 163 publications (63 citation statements)
References 4 publications
“…The classification accuracy in Table 1 is reported after 100 epochs of training. The state-of-the-art methods used for comparison in the supervised setting (S) are CCSA [2], d-SNE [7] and FADA [8]; several state-of-the-art unsupervised methods (U) [5,6,4] are also included for comparison. Cross-entropy (CE) classification on the source and target data is also compared as a baseline.…”
Section: Methods
confidence: 99%
“…The typical solution is to use another available dataset from a closely related task, which leads to the problem of domain adaptation [1]. Existing domain adaptation methods can be supervised [2], semi-supervised [3] or unsupervised [4,5,6]. The main task is to learn knowledge from the source domain and adapt it to the target domain.…”
Section: Introduction
confidence: 99%
“…Adversarial-based methods [9,33] alternately optimize the feature generator and the domain discriminator, which are supposed to be domain-confusing and discriminative, respectively, to achieve domain confusion. Moving beyond the marginal distribution assumption, recent works [4,6,20,22,23] show that models yield promising results by introducing label information. Joint Adaptation Network (JAN) [23] builds a joint distribution alignment model via the features from different hidden layers.…”
Section: Related Work
confidence: 99%
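The alternating generator/discriminator optimization summarized in the excerpt above can be illustrated with a minimal sketch. This is a generic DANN-style illustration, not the method of the cited works or of the ETD paper itself: the network sizes, loss weighting, optimizers and synthetic batches are assumptions made only for the example.

```python
# Minimal sketch of alternating adversarial domain confusion (DANN-style).
# Synthetic data and all hyperparameters below are illustrative assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)
n, dim, n_classes = 64, 32, 4

# Synthetic source (labeled) and target (unlabeled) batches.
xs, ys = torch.randn(n, dim), torch.randint(0, n_classes, (n,))
xt = torch.randn(n, dim) + 1.0                              # shifted target domain

feature = nn.Sequential(nn.Linear(dim, 16), nn.ReLU())      # feature generator
classifier = nn.Linear(16, n_classes)                       # label predictor
discriminator = nn.Linear(16, 1)                            # domain discriminator

opt_g = torch.optim.Adam(
    list(feature.parameters()) + list(classifier.parameters()), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
ce = nn.CrossEntropyLoss()

for step in range(100):
    # Discriminator step: separate source (label 1) from target (label 0) features.
    with torch.no_grad():
        fs, ft = feature(xs), feature(xt)
    d_loss = bce(discriminator(fs).squeeze(1), torch.ones(n)) + \
             bce(discriminator(ft).squeeze(1), torch.zeros(n))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: classify source correctly while confusing the discriminator.
    fs, ft = feature(xs), feature(xt)
    cls_loss = ce(classifier(fs), ys)
    confusion = bce(discriminator(ft).squeeze(1), torch.ones(n))  # make target look "source"
    g_loss = cls_loss + 0.1 * confusion
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print(f"final source classification loss: {cls_loss.item():.3f}")
```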
“…Ben-David et al. [2] give a theoretical insight into the domain adaptation problem: they show that the risk on the target domain is mainly bounded by the risk on the source domain and the discrepancy between the distributions of the two domains. Inspired by this theory, many methods have been proposed to mitigate the discrepancy between the feature distributions of the source and target domains, e.g., explicit discrepancy minimization via Maximum Mean Discrepancy (MMD) [13,21], domain-invariant feature learning [26], Optimal Transport (OT) based feature matching [7,20,37], manifold-based feature alignment [10], statistical moment matching [21,32] and adversarial domain adaptation [9]. These methods have proved effective in minimizing the marginal discrepancy and alleviating the domain shift problem.…”
Section: Introduction
confidence: 99%
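As a concrete illustration of the explicit discrepancy minimization mentioned in the excerpt above, the sketch below computes a squared MMD with an RBF kernel between source and target feature batches (the biased estimator that includes diagonal terms). The bandwidth and the random feature batches are assumptions for the example, not values from any cited paper.

```python
# Minimal sketch: squared MMD with an RBF kernel between source and target features.
# MMD^2 = E[k(xs, xs')] + E[k(xt, xt')] - 2 E[k(xs, xt)]
import torch

def rbf_kernel(a: torch.Tensor, b: torch.Tensor, bandwidth: float = 1.0) -> torch.Tensor:
    # Pairwise squared Euclidean distances, then Gaussian kernel.
    d2 = torch.cdist(a, b) ** 2
    return torch.exp(-d2 / (2.0 * bandwidth ** 2))

def mmd2(xs: torch.Tensor, xt: torch.Tensor, bandwidth: float = 1.0) -> torch.Tensor:
    k_ss = rbf_kernel(xs, xs, bandwidth).mean()
    k_tt = rbf_kernel(xt, xt, bandwidth).mean()
    k_st = rbf_kernel(xs, xt, bandwidth).mean()
    return k_ss + k_tt - 2.0 * k_st

if __name__ == "__main__":
    torch.manual_seed(0)
    xs = torch.randn(128, 16)          # source features
    xt = torch.randn(128, 16) + 0.5    # shifted target features
    print(f"MMD^2 estimate: {mmd2(xs, xt).item():.4f}")
```

Adding such a term to the training objective penalizes the distance between the two feature distributions, which is the basic mechanism behind MMD-based adaptation methods such as those cited in [13,21].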
“…We compare GET with several SOTA UDA methods: DAN (Long et al., 2015), DANN (Ganin & Lempitsky, 2015), SAFN (Xu et al., 2019), ETD (Li et al., 2020), ALDA (Chen et al., 2020), DMP (Luo et al., 2020), BNM (Cui et al., 2020), DWL (Xiao & Zhang, 2021).…”
Section: Comparison
confidence: 99%