2021
DOI: 10.1109/access.2021.3087867
Unsupervised Domain Adaptation Based on Pseudo-Label Confidence

Abstract: Unsupervised domain adaptation aims to align the distributions of data in the source and target domains, and to assign labels to data in the target domain. In this paper, we propose a new method named Unsupervised Domain Adaptation based on Pseudo-Label Confidence (UDA-PLC). Concretely, UDA-PLC first learns a new feature representation by projecting data of the source and target domains into a latent subspace. In this subspace, the distributions of data in the two domains are aligned and the discriminability of fea…
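The abstract is cut off before it states how the alignment is actually performed, so the following is only a minimal sketch of the general recipe it describes, assuming a TCA-style linear projection that minimizes Maximum Mean Discrepancy (MMD) between the projected domains. The function mmd_subspace, its parameters dim and mu, and the MMD objective itself are illustrative assumptions, not details taken from the paper.

```python
import numpy as np
from scipy.linalg import eigh

def mmd_subspace(Xs, Xt, dim=30, mu=1.0):
    """Project source and target features into a shared latent subspace
    that minimizes linear MMD between the two domains (TCA-style sketch;
    the actual UDA-PLC objective is not visible in the truncated abstract).

    Xs: (ns, d) source features; Xt: (nt, d) target features.
    Returns the projected source and target features, each (*, dim).
    """
    ns, nt = len(Xs), len(Xt)
    n = ns + nt
    X = np.vstack([Xs, Xt])                      # stacked data, (n, d)

    # MMD coefficient matrix: e_i = 1/ns for source rows, -1/nt for target.
    e = np.r_[np.full(ns, 1.0 / ns), np.full(nt, -1.0 / nt)]
    M = np.outer(e, e)                           # (n, n)

    H = np.eye(n) - np.ones((n, n)) / n          # centering matrix

    # Generalized eigenproblem (X^T M X + mu I) w = lambda (X^T H X) w:
    # small-eigenvalue directions shrink the domain gap while the
    # variance term X^T H X keeps the projection informative.
    A = X.T @ M @ X + mu * np.eye(X.shape[1])
    B = X.T @ H @ X + 1e-6 * np.eye(X.shape[1])  # ridge keeps B positive definite
    _, vecs = eigh(A, B)                         # eigenvalues in ascending order
    W = vecs[:, :dim]

    Z = X @ W
    return Z[:ns], Z[ns:]
```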

Cited by 9 publications (3 citation statements). References 32 publications.
“…To demonstrate the efficiency of our VTL, the results of our experiments are compared with several unsupervised domain adaptation methods including EMFS (2018) [40], EasyTL (2019) [41], STJML (2020) [42], GEF (2019) [43], DWDA (2021) [44], CDMA (2020) [45], ALML (2022) [46], TTLC (2021) [33], SGA-MDAP (2020) [47], NSO (2020) [48], FSUTL (2020) [49], PLC (2021) [50], GSI (2021) [51] and ICDAV (2022) [52]. In the experiments, VTL begins with learning a domain invariant and class discriminative latent feature space according to Equation (18).…”
Section: Results (mentioning, confidence: 99%)
“…However, this unbiased assignment of pseudo-labels easily introduces noise. To alleviate this issue, common solutions include combining pseudo-labels from multiple models [6][7][8] and learning self-defined classifier functions [9][10][11]. In addition, pseudo-labels that survive biased selection participate in feature-mapping learning, which helps improve the effect of transfer learning.…”
Section: Introduction (mentioning, confidence: 99%)
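The excerpt above mentions combining pseudo-labels from several models as a standard remedy for label noise. Below is a minimal sketch of that idea, assuming a simple agreement filter over three off-the-shelf scikit-learn classifiers; the model choice and the unanimity rule are illustrative assumptions, not taken from any of the cited works.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import LinearSVC

def consensus_pseudo_labels(Xs, ys, Xt):
    """Pseudo-label target samples with several source-trained models and
    keep only the samples on which every model agrees (noise filtering)."""
    models = [LogisticRegression(max_iter=1000),
              KNeighborsClassifier(n_neighbors=5),
              LinearSVC()]
    preds = np.stack([m.fit(Xs, ys).predict(Xt) for m in models])

    # Unanimous votes are treated as clean pseudo-labels; any
    # disagreement marks the sample as potentially noisy.
    agree = (preds == preds[0]).all(axis=0)
    idx = np.where(agree)[0]
    return preds[0][idx], idx
```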
“…Unsupervised Domain Adaptation based on Pseudo-Label Confidence (UDA-PLC) [27] projects data of the source and target domains into a latent subspace, aligning the distributions of the two domains and improving the discriminability of features in both. Then, UDA-PLC applies Structured Prediction (SP) and Nearest Class Prototype (NCP) to predict pseudo-labels of data in the target domain, and it performs sample selection in every iteration.…”
Section: Introduction (mentioning, confidence: 99%)
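The excerpt names Structured Prediction (SP) and Nearest Class Prototype (NCP) without reproducing their formulas, so the sketch below shows only a generic NCP step: class prototypes are source class means, each target sample takes the label of its nearest prototype, and a margin-based confidence score drives the per-iteration sample selection. The margin heuristic and the keep fraction are assumptions for illustration, not the paper's actual criterion.

```python
import numpy as np

def ncp_pseudo_labels(Zs, ys, Zt, keep=0.5):
    """Nearest-Class-Prototype pseudo-labeling with margin-based selection.

    Zs, Zt: source/target features already mapped into the shared subspace;
    ys: source labels; keep: fraction of target samples retained per round.
    """
    classes = np.unique(ys)
    # Class prototypes: mean of the source samples of each class.
    protos = np.stack([Zs[ys == c].mean(axis=0) for c in classes])

    # Distances from every target sample to every prototype, (nt, C).
    d = np.linalg.norm(Zt[:, None, :] - protos[None, :, :], axis=2)
    labels = classes[d.argmin(axis=1)]

    # Confidence = margin between the two nearest prototypes; a wide
    # margin means the nearest-prototype assignment is unambiguous.
    sorted_d = np.sort(d, axis=1)
    margin = sorted_d[:, 1] - sorted_d[:, 0]

    # Keep the most confident fraction; the rest wait for later rounds.
    idx = np.argsort(-margin)[: int(keep * len(Zt))]
    return labels[idx], idx
```

In an iterative scheme like the one the excerpt describes, the selected pairs would be fed back into the next round of subspace learning, gradually growing the trusted pseudo-labeled set.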