2020
DOI: 10.1609/aaai.v34i04.6123
Abstract: Recent works on domain adaptation reveal the effectiveness of adversarial learning in bridging the discrepancy between source and target domains. However, two common limitations exist in current adversarial-learning-based methods. First, samples from the two domains alone are not sufficient to ensure domain invariance over most of the latent space. Second, the domain discriminator involved in these methods can only judge real or fake under the guidance of a hard label, while it is more reasonable to use soft scores to …
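The abstract sketches the paper's core idea: convexly mix source and target samples and supervise the domain discriminator with the soft mixing ratio instead of a hard 0/1 domain label. A minimal NumPy sketch of that mixing step (variable names and the Beta-sampled ratio are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def domain_mixup(x_src, x_tgt, alpha=2.0, rng=None):
    """Convexly combine a source batch and a target batch.

    Returns the mixed batch and the soft domain score lam:
    lam = 1 would mean pure source, lam = 0 pure target.
    """
    rng = rng or np.random.default_rng(0)
    lam = float(rng.beta(alpha, alpha))   # mixing ratio in (0, 1)
    x_mix = lam * x_src + (1.0 - lam) * x_tgt
    return x_mix, lam

# The discriminator is then trained to regress lam (a soft score)
# rather than to classify a hard source/target label.
```

With a toy all-ones source batch and all-zeros target batch, every entry of the mixed batch equals `lam`, which makes the soft supervision signal easy to inspect.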

Cited by 260 publications (139 citation statements)
References 25 publications
“…Inspired by mixup in image recognition and semisupervised classification [43], [57], we propose a simple but effective method that generates pseudo training images by interpolating between labeled and unlabeled images. It should be noted that although the recent work [45], [58], [59]…”
Section: B. Intermediate Domain Image Generator
confidence: 99%
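The interpolation this citing work describes is standard mixup applied across labeled and unlabeled images, with the pseudo-label interpolated alongside the pixels. A minimal sketch (function and argument names are my own; the pseudo-label would come from a model's prediction on the unlabeled image):

```python
import numpy as np

def mixup_pair(x_labeled, y_onehot, x_unlabeled, y_pseudo, lam=0.7):
    """Interpolate a labeled and an unlabeled image, and their
    (pseudo-)labels, with the same mixing ratio lam."""
    x_mix = lam * x_labeled + (1.0 - lam) * x_unlabeled
    y_mix = lam * y_onehot + (1.0 - lam) * y_pseudo
    return x_mix, y_mix
```

For example, mixing a one-hot label [1, 0] with a pseudo-label [0.2, 0.8] at lam = 0.7 yields the soft target [0.76, 0.24], so the generated pseudo training image carries a correspondingly blended supervision signal.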
“…To adapt the classifier, the label distribution is matched by estimating a re-weighted source domain label distribution. Adversarial Domain Adaptation with Domain Mixup (ADADM) [44] advances adversarial learning by mixing transformed source and real target domain samples to train a more robust generator.…”
Section: Unsupervised Domain Adaptation
confidence: 99%
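The "more robust generator" in ADADM comes from pitting it against a discriminator supervised with soft mixing scores on mixed samples. A sketch of such a soft-score discriminator loss in least-squares form (an assumption for illustration; the paper's exact loss may differ):

```python
import numpy as np

def soft_domain_loss(d_scores, lam):
    """Mean squared error between discriminator outputs on mixed
    samples and the soft mixing ratio lam, replacing the usual
    hard 0/1 domain-classification target."""
    return float(np.mean((np.asarray(d_scores) - lam) ** 2))
```

A discriminator that predicts exactly the mixing ratio incurs zero loss, whereas confident hard predictions (0 or 1) on an evenly mixed batch are penalized.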
“…Here we mainly focus on the recent adversarial techniques that have achieved high adaptation accuracy and demonstrated robustness in transferring heterogeneous feature spaces. Therefore, we select Adversarial Discriminative Domain Adaptation (ADDA) [39], Domain-Adversarial Neural Network (DANN) [11], Deep Adaptation Networks (DAN) [24], and Adversarial Domain Adaptation with Domain Mixup (ADADM) [44]. Also we compare with a recent domain adaptation technique that has been evaluated on the same datasets and tasks, which is Stratified Transfer Learning (STL) [6].…”
Section: Evaluation Process
confidence: 99%
“…[58] proposes a class-conditioned domain alignment method to reduce domain class imbalance and cross-domain class distribution shift. Some recent UDA methods [59], [60] utilize MIXUP [61] to regularize the domain classifier, which encourages learning domain-invariant feature representations across domains. Much research has achieved compelling performance on image classification [36]- [40], [56], [57], while some studies address more complicated tasks such as semantic segmentation [62]- [65].…”
Section: Related Work
confidence: 99%