2021
DOI: 10.48550/arxiv.2107.03008
Preprint
Learning Invariant Representation with Consistency and Diversity for Semi-supervised Source Hypothesis Transfer

Abstract: Semi-supervised domain adaptation (SSDA) aims to solve tasks in the target domain by utilizing transferable information learned from the available source domain and a few labeled target data. However, source data is not always accessible in practical scenarios, which restricts the application of SSDA in real-world circumstances. In this paper, we propose a novel task named Semi-supervised Source Hypothesis Transfer (SSHT), which performs domain adaptation based on a source-trained model, to generalize well in the target…
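The abstract states the SSHT setting only at a high level, while the title points to consistency and diversity objectives. As a rough illustration of how one adaptation step in this family of methods could look, here is a minimal PyTorch-style sketch, assuming a generic consistency term (agreement between weakly and strongly augmented views) and a generic diversity term (entropy of the batch-mean prediction); all names and loss weights are hypothetical, and this is not the authors' exact objective.

# Hypothetical sketch of an SSHT-style training step: adapt a source-trained
# model using a few labeled target samples plus unlabeled target data,
# without access to any source data. The consistency/diversity terms below
# are generic stand-ins, not the paper's exact losses.
import torch
import torch.nn.functional as F

def ssht_step(model, labeled_x, labeled_y, unlabeled_weak, unlabeled_strong,
              lam_con=1.0, lam_div=0.1):
    # Supervised cross-entropy on the few labeled target samples.
    sup_loss = F.cross_entropy(model(labeled_x), labeled_y)

    # Consistency: predictions under strong augmentation should match
    # (pseudo-)predictions under weak augmentation.
    with torch.no_grad():
        weak_prob = F.softmax(model(unlabeled_weak), dim=1)
    strong_logp = F.log_softmax(model(unlabeled_strong), dim=1)
    con_loss = F.kl_div(strong_logp, weak_prob, reduction="batchmean")

    # Diversity: maximize the entropy of the batch-mean prediction
    # (i.e., minimize sum p*log p) to prevent collapse to a single class.
    mean_prob = F.softmax(model(unlabeled_weak), dim=1).mean(dim=0)
    div_loss = (mean_prob * torch.log(mean_prob + 1e-8)).sum()

    return sup_loss + lam_con * con_loss + lam_div * div_loss

A diversity term of this kind is a common safeguard when no source data is available to anchor the class distribution, since pure consistency or entropy objectives can otherwise collapse onto a few classes.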

Cited by 2 publications (3 citation statements)
References 40 publications (102 reference statements)
“…Model adaptation (MA) was proposed to implement domain adaptation without access to the source data, thereby addressing the dilemma of data sharing versus data privacy [8] in traditional domain adaptation. Many works have been proposed to address the problem of MA [8], [9], [11], [12], [25]–[36], and the methods can be roughly categorized into two streams: generative [25], [27] and discriminative [8], [9], [26]. Generative methods usually model the generation of labeled images or features.…”
Section: B. Model Adaptation
Citation type: mentioning (confidence: 99%)
“…We compared our method with i) baseline methods: training only on the labeled target samples with cross-entropy loss (CE), and CE with entropy minimization for unlabeled target samples (ENT) [58]; ii) semi-supervised domain adaptation (SSDA) methods: minimax entropy (MME) [6] and CDAC [7]; iii) semi-supervised learning (SSL) methods: MixMatch [59] and FixMatch [15]; iv) semi-supervised model adaptation (SSMA) methods: SHOT++ [32] and SSHT [12]; and v) a universal model adaptation method: UMA [29]. To make a fair comparison, we implement them with the same F(·|θ_s) as used in our method.…”
Section: B. Comparison Experiments
Citation type: mentioning (confidence: 99%)
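The CE and ENT baselines named in the statement above are simple to state in code. The following is a minimal sketch, assuming a PyTorch-style classifier; the function name, the weighting factor lam, and the augmentation-free setup are illustrative assumptions, not taken from any cited implementation.

# Hypothetical sketch of the ENT baseline described above: cross-entropy on
# the labeled target samples plus entropy minimization on the unlabeled ones.
# Setting lam=0 recovers the plain CE baseline.
import torch
import torch.nn.functional as F

def ent_baseline_loss(model, labeled_x, labeled_y, unlabeled_x, lam=0.1):
    # Standard supervised cross-entropy on the labeled target data.
    ce = F.cross_entropy(model(labeled_x), labeled_y)

    # Entropy minimization: push unlabeled predictions toward confident,
    # low-entropy distributions.
    prob = F.softmax(model(unlabeled_x), dim=1)
    ent = -(prob * torch.log(prob + 1e-8)).sum(dim=1).mean()

    return ce + lam * ent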
“…Although promising, existing SSDA methods usually assume that the source data is available during training, which is impractical in many real-world scenarios where restrictions apply, e.g., data privacy and limited storage [28]. To meet such new demands, a new research topic, namely model adaptation [28, 32, 33, 50, 57], has recently been proposed with the aim of transferring knowledge from a pretrained source model rather than the source data. To simplify the problem, most of these works assume that the source and target domains share the same label set.…”
Section: Introduction
Citation type: mentioning (confidence: 99%)