2021
DOI: 10.1109/access.2021.3136567

Multi-View Collaborative Learning for Semi-Supervised Domain Adaptation

Abstract: Recently, Semi-supervised Domain Adaptation (SSDA) has become more practical because a small number of labeled target samples can significantly boost empirical target performance. Several current methods focus on prototype-based alignment to achieve cross-domain invariance, in which the labeled samples from the source and target domains are concatenated to estimate the prototypes. The model is then trained to assign the unlabeled target data to the prototype of the same class. However, s…
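As a rough illustration of the prototype-based alignment described in the abstract, the sketch below estimates class prototypes from the concatenated labeled source and labeled target features and pulls unlabeled target features toward their nearest prototype. The normalization, temperature, and entropy objective are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch of prototype-based alignment (illustrative, not the paper's method).
import torch
import torch.nn.functional as F

def class_prototypes(feats_src, labels_src, feats_tgt_lab, labels_tgt_lab, num_classes):
    """Estimate one prototype per class from the concatenated labeled source and
    labeled target features. Assumes every class has at least one labeled sample."""
    feats = torch.cat([feats_src, feats_tgt_lab], dim=0)     # (N, D)
    labels = torch.cat([labels_src, labels_tgt_lab], dim=0)  # (N,)
    protos = torch.stack([feats[labels == c].mean(dim=0) for c in range(num_classes)])
    return F.normalize(protos, dim=1)                         # (C, D), unit-norm rows

def alignment_loss(feats_tgt_unl, protos, temperature=0.05):
    """Softly assign each unlabeled target feature to the prototypes via cosine
    similarity and sharpen the assignment by minimizing its entropy."""
    feats = F.normalize(feats_tgt_unl, dim=1)
    logits = feats @ protos.t() / temperature                 # (M, C)
    probs = F.softmax(logits, dim=1)
    return -(probs * probs.log()).sum(dim=1).mean()
```

In this sketch the encoder producing the features is shared across domains, so lowering the assignment entropy nudges unlabeled target embeddings toward the class prototypes estimated from both domains.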


Cited by 11 publications (15 citation statements)
References 35 publications
“…Specifically, [1], [11], [13], [15], [29]–[31], [33], [39] utilized a CNN encoder followed by an MLP classifier in both stages. To alleviate confirmation bias, [9], [12] introduced a new framework that adds one more MLP classifier, consisting of a single CNN encoder and two MLP classifiers. [14] was the first SSDA approach to use two CNN encoders and two MLP classifiers with a co-training strategy to boost classification accuracy.…”
Section: B. Variants of SSDA Framework
confidence: 99%
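For concreteness, the encoder/classifier variants surveyed in this citation statement might look like the following PyTorch sketch; the ResNet-34 backbone, hidden width, and head structure are illustrative assumptions rather than the cited papers' exact architectures.

```python
import torch.nn as nn
import torchvision.models as models

class SSDANet(nn.Module):
    """Common SSDA backbone variant: one CNN encoder followed by an MLP classifier.
    Setting two_heads=True adds a second MLP classifier on the same encoder,
    the variant used to reduce confirmation bias via classifier disagreement."""
    def __init__(self, num_classes, feat_dim=512, two_heads=False):
        super().__init__()
        resnet = models.resnet34(weights=None)
        # Drop the final fully connected layer to keep only the CNN encoder.
        self.encoder = nn.Sequential(*list(resnet.children())[:-1])
        def make_head():
            return nn.Sequential(nn.Flatten(),
                                 nn.Linear(feat_dim, 256), nn.ReLU(),
                                 nn.Linear(256, num_classes))
        self.head1 = make_head()
        self.head2 = make_head() if two_heads else None

    def forward(self, x):
        f = self.encoder(x)
        out1 = self.head1(f)
        return (out1, self.head2(f)) if self.head2 is not None else out1
```

The co-training variant attributed to [14] would instead instantiate two such encoder/classifier pairs and let them exchange predictions on the target data.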
“…The semi-supervised domain adaptation scenario, as depicted in [9]–[18], is widely employed because it yields markedly higher classification accuracy than the unsupervised domain adaptation (UDA) setting [19]–[23]. This is because a model trained under UDA only has access to labeled source data, whereas a model trained in the SSDA setting additionally benefits from a few labeled target samples besides the labeled source data.…”
Section: Introduction
confidence: 99%
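The difference in data access can be made concrete with a toy setup: an SSDA method draws labeled source data, a handful of labeled target samples, and unlabeled target data at each step, whereas UDA drops the labeled-target loader entirely. Dataset names, shapes, and batch sizes below are placeholders, not values from the paper.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Toy stand-ins for the three data pools an SSDA method sees.
labeled_source   = TensorDataset(torch.randn(1000, 3, 32, 32), torch.randint(0, 10, (1000,)))
labeled_target   = TensorDataset(torch.randn(30, 3, 32, 32),  torch.randint(0, 10, (30,)))  # ~3 shots/class
unlabeled_target = TensorDataset(torch.randn(2000, 3, 32, 32))

src_loader     = DataLoader(labeled_source,   batch_size=32, shuffle=True)
tgt_lab_loader = DataLoader(labeled_target,   batch_size=8,  shuffle=True)
tgt_unl_loader = DataLoader(unlabeled_target, batch_size=32, shuffle=True)

# UDA would use only src_loader and tgt_unl_loader; SSDA also draws the few
# labeled target samples each step (in practice the small loader is cycled).
for (xs, ys), (xt, yt), (xu,) in zip(src_loader, tgt_lab_loader, tgt_unl_loader):
    pass  # supervised losses on (xs, ys) and (xt, yt); alignment/unsupervised loss on xu
```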