2021
DOI: 10.1109/tnnls.2020.2995648
Learning Target-Domain-Specific Classifier for Partial Domain Adaptation

Abstract: Unsupervised domain adaptation (UDA) aims at reducing the distribution discrepancy when transferring knowledge from a labeled source domain to an unlabeled target domain. Previous UDA methods assume that the source and target domains share an identical label space, which is unrealistic in practice since the label information of the target domain is agnostic. This paper focuses on a more realistic UDA scenario, i.e. partial domain adaptation (PDA), where the target label space is subsumed to the source label sp…

Cited by 28 publications (7 citation statements). References 37 publications (80 reference statements).
“…In particular, there are totally 25 classes (the first 25 in the alphabetical order) out of 65 classes in the target domain for Office-Home, while the first 6 classes in the alphabetical order out of 12 classes are included in the target domain for VISDA-C. Results of our methods and previous state-of-the-art PDA methods [105], [106], [107], [108] are shown in Table 7. As explained in Section 3.6, β = 0 is utilized in all of our methods here.…”
Section: Results of Object Recognition Beyond Vanilla UDA
confidence: 99%
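The PDA protocol quoted above builds the target label space by keeping only the first N classes in alphabetical order out of the source's full label set (25 of 65 for Office-Home, 6 of 12 for VISDA-C). A minimal sketch of that selection rule, using illustrative class names rather than the actual Office-Home categories:

```python
# Hypothetical sketch of the partial domain adaptation (PDA) class split
# described above: the target domain keeps only the first n_target classes
# in alphabetical order out of the source's full label set.
# The class names below are illustrative, not the real benchmark categories.

def partial_target_classes(source_classes, n_target):
    """Return the first n_target classes of source_classes in alphabetical order."""
    return sorted(source_classes)[:n_target]

source_classes = ["Desk", "Alarm_Clock", "Bike", "Couch", "Bottle"]
print(partial_target_classes(source_classes, 3))
# → ['Alarm_Clock', 'Bike', 'Bottle']
```

Under this split, target samples whose labels fall outside the selected subset are excluded, which is what makes the target label space a strict subset of the source's.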
“…For vanilla unsupervised DA in digit recognition, we compare SHOT with ADDA [5], ADR [56], CDAN [42], CyCADA [8], CAT [96], SWD [97], and STAR [98]; for object recognition, we compare ours with DANN [24], DAN [4], SAFN [82], BSP [99], MDD [100], TransNorm [55], DSBN [25], BNM [101], and GVB-GD [102]. For partial-set DA tasks, we compare ours with IWAN [103], SAN [89], ETN [104], DRCN [105], RTNet_adv [106], BA³US [107], and TSCDA [108]. For multi-source UDA, we compare ours with DCTN [109], MCD [6], WBN [110], M³SDA-β [18], and CMSS [111].…”
Section: Setup
confidence: 99%
“…DA methods aim to find a latent space for source and target domains, so that the discrimination information in the source domain can be efficiently transferred to the recognition task in the target domain [22]- [25]. Generally, DA methods can be divided into three categories: unsupervised DA methods, weakly-supervised DA methods, and semi-supervised DA methods.…”
Section: B. Domain Adaptation Methods
confidence: 99%
“…In practice, visual discrepancies among the clinical datasets could be large [39,40,41,42]. For example, DRIVE dataset contains 33 images without any sign of diabetic retinopathy and 7 images with signs of mild early diabetic retinopathy.…”
Section: Evaluation on Cross-Datasets
confidence: 99%