2023
DOI: 10.1109/tpami.2022.3146234
Semi-Supervised Heterogeneous Domain Adaptation: Theory and Algorithms

Abstract: Semi-supervised heterogeneous domain adaptation (SsHeDA) aims to train a classifier for a target domain in which only unlabeled data and a small number of labeled data are available, by leveraging knowledge acquired from a heterogeneous source domain. From an algorithmic perspective, several methods have been proposed to solve the SsHeDA problem; yet there is still no theoretical foundation to explain the nature of the SsHeDA problem or to guide new and better solutions. Motivated by compatibility con…

Cited by 42 publications (7 citation statements)
References 51 publications
“…CWAN learns a feature transformer, a domain discriminator, and a classifier through a weighted adversarial approach in order to achieve cross-domain adaptation. JMEA [38] is a semi-supervised heterogeneous domain adaptation (SsHeDA) algorithm that can handle massive datasets. In HCSA [39], structure preservation, distribution alignment, and classification-space alignment are implemented jointly with feature representation by transferring both source-domain representation and model knowledge.…”
Section: Results (mentioning confidence: 99%)
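For readers unfamiliar with this family of methods, the following is a minimal sketch, in PyTorch, of the three-component adversarial setup the statement above attributes to CWAN: a feature transformer that maps heterogeneous source features into the target feature space, a domain discriminator that tries to tell transformed source features from target features, and a classifier trained on the few labeled examples. All module names, dimensions, and toy data here are illustrative assumptions, and CWAN's per-sample weighting of source instances is omitted; this is not the authors' actual implementation.

```python
# Minimal sketch of an adversarial heterogeneous domain adaptation setup.
# Assumed architecture for illustration only; omits CWAN's source-sample weights.
import torch
import torch.nn as nn

class FeatureTransformer(nn.Module):
    """Maps d_src-dimensional source features into the d_tgt target space."""
    def __init__(self, d_src: int, d_tgt: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d_src, hidden), nn.ReLU(),
                                 nn.Linear(hidden, d_tgt))
    def forward(self, x):
        return self.net(x)

class DomainDiscriminator(nn.Module):
    """Scores whether a feature vector came from the (transformed) source or the target."""
    def __init__(self, d_tgt: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d_tgt, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))
    def forward(self, z):
        return self.net(z)  # raw logit; 1 = source, 0 = target

d_src, d_tgt, n_classes = 300, 128, 10
transformer = FeatureTransformer(d_src, d_tgt)
discriminator = DomainDiscriminator(d_tgt)
classifier = nn.Linear(d_tgt, n_classes)

# Toy batches standing in for real data.
xs = torch.randn(32, d_src)               # labeled source features
ys = torch.randint(0, n_classes, (32,))   # source labels
xt = torch.randn(32, d_tgt)               # unlabeled target features

bce = nn.BCEWithLogitsLoss()
ce = nn.CrossEntropyLoss()

# Discriminator step: distinguish transformed source from target features.
zs = transformer(xs).detach()
d_loss = bce(discriminator(zs), torch.ones(32, 1)) + \
         bce(discriminator(xt), torch.zeros(32, 1))

# Transformer/classifier step: classify labeled source features while
# fooling the discriminator into labeling transformed source as target.
zs = transformer(xs)
g_loss = ce(classifier(zs), ys) + bce(discriminator(zs), torch.zeros(32, 1))
```

In practice the two losses are minimized in alternation with separate optimizers, which is the standard adversarial training loop; the weighted variant would scale each source sample's contribution to both losses.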
“…The experimental results show that VAL outperforms the other three defuzzification methods. Therefore, Eq. (19) is used as the defuzzification function in all subsequent experiments.…”
Section: Construct Fuzzy Classifiers for Solving the MCIMO Problem (mentioning confidence: 99%)
“…Zhang et al. (2022) propose a Masked OT model as a regularization term that preserves local feature invariances between the fine-tuned and pretrained graph neural network (GNN) during GNN fine-tuning. Although Masked OT (Zhang et al., 2022) shares a similar spirit with the mask-based modeling in our method, our main contribution is relation preservation for imposing the guidance of keypoints, which differs from (Zhang et al., 2022). In our mask-based modeling, the mask is used to impose the matching of keypoints with a theoretical guarantee.…”
Section: Related Work (mentioning confidence: 99%)
“…In our mask-based modeling, the mask is used to impose the matching of keypoints with a theoretical guarantee, while the mask in (Zhang et al., 2022) aims to preserve the local information of the fine-tuned model relative to the pretrained one. The motivation and design of our mask are therefore different from those in (Zhang et al., 2022).…”
Section: Related Work (mentioning confidence: 99%)
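To make the masked-OT idea in the two statements above concrete, here is a self-contained sketch assuming an entropic (Sinkhorn) optimal-transport formulation in which a binary mask restricts which feature pairings may carry mass, so that masked-in correspondences (e.g. known keypoints) are enforced. The function name, the identity mask, and the toy features are assumptions for illustration; the actual Masked OT objective in (Zhang et al., 2022) may differ.

```python
# Sketch of a masked entropic-OT regularizer between pretrained and
# fine-tuned features. Assumed formulation, not the authors' code.
import torch

def masked_sinkhorn(C, mask, eps=0.1, iters=100):
    """Entropic OT plan for cost matrix C, with disallowed pairs masked out."""
    K = torch.exp(-C / eps) * mask        # Gibbs kernel; masked entries carry no mass
    n, m = C.shape
    a = torch.full((n,), 1.0 / n)         # uniform row marginals
    b = torch.full((m,), 1.0 / m)         # uniform column marginals
    u, v = torch.ones(n), torch.ones(m)
    for _ in range(iters):                # Sinkhorn fixed-point updates
        u = a / (K @ v + 1e-9)
        v = b / (K.T @ u + 1e-9)
    return u[:, None] * K * v[None, :]    # transport plan P = diag(u) K diag(v)

# Toy features from the pretrained and fine-tuned models.
z_pre = torch.randn(16, 32)
z_fin = torch.randn(16, 32)

C = torch.cdist(z_pre, z_fin) ** 2        # squared-Euclidean pairwise cost
C = C / C.max()                           # normalize cost for numerical stability
mask = torch.eye(16)                      # e.g. allow only known keypoint pairings

P = masked_sinkhorn(C, mask)
ot_reg = (P * C).sum()                    # OT regularization term added to the loss
```

With a keypoint-correspondence mask, the plan can only move mass along the allowed pairings, so minimizing `ot_reg` pulls each fine-tuned feature toward its masked-in counterpart, which is one way to read "imposing the matching of keypoints" in the statement above.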