2020
DOI: 10.1609/aaai.v34i04.5943
Unsupervised Domain Adaptation via Discriminative Manifold Embedding and Alignment

Abstract: Unsupervised domain adaptation is effective in leveraging the rich information from the source domain to the unsupervised target domain. Though deep learning and adversarial strategy make an important breakthrough in the adaptability of features, there are two issues to be further explored. First, the hard-assigned pseudo labels on the target domain are risky to the intrinsic data structure. Second, the batch-wise training manner in deep learning limits the description of the global structure. In this paper, a…

Cited by 27 publications (19 citation statements)
References 22 publications
“…Class-level methods align the conditional distribution based on pseudo-labels (Chen et al, 2020a;Luo et al, 2020a;Li et al, 2020b;Jiang et al, 2020;Liang et al, 2020;Venkat et al, 2020). Conditional alignment methods (Xie et al, 2018;Long et al, 2018) minimize the discrepancy between conditional distributions.…”
Section: Related Work
confidence: 99%
“…However, the conditional distributions from different categories tend to mix together, leading to performance drop. Contrastive learning based methods resolve this issue by discriminating features from different classes (Kang et al, 2019;Chen et al, 2020a;Luo et al, 2020a), but still face the problem of pseudo-label precision.…”
Section: Introduction
confidence: 99%
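The class-level contrastive idea referenced in the statement above — pulling together target samples that share a pseudo-label and pushing apart the rest — can be sketched as follows. This is only an illustrative toy objective under assumed inputs (L2-normalizable feature vectors and hard pseudo-labels); it is not the exact loss of any of the cited papers, and the function name and temperature value are made up for the example.

```python
import numpy as np

def pseudo_label_contrastive_loss(features, pseudo_labels, temperature=0.5):
    """Toy class-level contrastive loss on L2-normalized features.

    Pairs sharing a pseudo-label act as positives; all other samples in the
    batch act as negatives. Illustrative sketch only.
    """
    # L2-normalize so dot products are cosine similarities
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = f @ f.T / temperature
    n = len(pseudo_labels)
    loss, count = 0.0, 0
    for i in range(n):
        # positives: other samples assigned the same pseudo-label
        pos = [j for j in range(n) if j != i and pseudo_labels[j] == pseudo_labels[i]]
        if not pos:
            continue
        logits = np.delete(sim[i], i)          # exclude self-similarity
        log_denom = np.log(np.exp(logits).sum())
        for j in pos:
            jj = j if j < i else j - 1         # re-index after deleting i
            loss += -(logits[jj] - log_denom)  # -log softmax of the positive pair
            count += 1
    return loss / count

# Two tight clusters: samples 0,1 share pseudo-label 0; samples 2,3 share 1
feats = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
loss_good = pseudo_label_contrastive_loss(feats, [0, 0, 1, 1])
# Mislabeled pseudo-labels pair dissimilar samples, raising the loss —
# the "pseudo-label precision" problem the statement points to
loss_bad = pseudo_label_contrastive_loss(feats, [0, 1, 0, 1])
```

Comparing `loss_good` and `loss_bad` shows why such methods "still face the problem of pseudo-label precision": wrong pseudo-labels turn dissimilar pairs into positives and the objective actively mixes the classes it was meant to separate.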
“…For robustness, Saito et al [8] take into consideration the task-specific decision boundaries for better alignment of two domains. Luo et al [6] introduce a Riemannian manifold learning framework to make the representations to be both transferable and discriminative. Wang and Breckon [27] integrate supervised subspace learning with structured prediction to formulate an iterative domain adaptation framework.…”
Section: Related Work
confidence: 99%
“…As shown in Eq. (5) and (6), the attention maps are determined by the features themselves. Thus, two category-sharing samples tend to have similar self-attention maps in the last layers of a network, due to the higher similarity between deeper features [44]- [46].…”
Section: CAM for Negative Association Rejection
confidence: 99%
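The claim above — that self-attention maps are determined by the features themselves, so category-sharing samples yield similar maps — can be checked with a minimal sketch. The softmax-of-similarities form below and the synthetic feature maps are assumptions for illustration; the cited work's Eq. (5)/(6) is not reproduced here.

```python
import numpy as np

def self_attention_map(feature_map, temperature=1.0):
    """Self-attention over spatial positions: row-wise softmax(F F^T / t)."""
    scores = feature_map @ feature_map.T / temperature
    scores -= scores.max(axis=1, keepdims=True)   # numerical stability
    e = np.exp(scores)
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
base = rng.normal(size=(4, 8))                       # 4 positions, 8 channels
same_cat = base + 0.05 * rng.normal(size=base.shape) # near-duplicate deep features
other_cat = rng.normal(size=(4, 8))                  # unrelated features

a0, a_same, a_other = map(self_attention_map, (base, same_cat, other_cat))

# Attention maps of category-sharing (similar-feature) samples stay close,
# while an unrelated sample's map diverges
d_same = np.abs(a0 - a_same).mean()
d_other = np.abs(a0 - a_other).mean()
```

Because the map is a deterministic function of the features, small feature perturbations give small map changes (`d_same` stays well below `d_other`), which is exactly the property the statement relies on for rejecting negative associations.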