2022
DOI: 10.48550/arxiv.2205.06701
Preprint

Knowledge Distillation Meets Open-Set Semi-Supervised Learning

Abstract: Existing knowledge distillation methods mostly focus on distilling the teacher's predictions and intermediate activations. However, the structured representation, which is arguably one of the most critical ingredients of deep models, is largely overlooked. In this work, we propose a novel Semantic Representational Distillation (SRD) method dedicated to distilling representational knowledge semantically from a pretrained teacher to a target student. The key idea is that we leverage the teacher's classifier as a…
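The abstract is cut off above, but the stated idea, reusing the pretrained teacher's classifier to supervise the student at the representation level, can be illustrated with a minimal sketch. Everything here (the linear projector, the KL-based alignment, all hyperparameters) is an assumption for illustration, not the paper's actual implementation:

```python
# Minimal sketch of representation-level distillation through a frozen teacher
# classifier head, assuming a standard PyTorch setup. The projector, the KL
# objective, and the temperature are assumptions, not details from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SemanticRepDistill(nn.Module):
    def __init__(self, student_dim, teacher_dim, teacher_classifier, tau=4.0):
        super().__init__()
        # Map student features into the teacher's feature space (assumed design).
        self.proj = nn.Linear(student_dim, teacher_dim)
        # Reuse the teacher's classifier, kept frozen, as a semantic "decoder".
        self.teacher_fc = teacher_classifier
        for p in self.teacher_fc.parameters():
            p.requires_grad = False
        self.tau = tau

    def forward(self, student_feat, teacher_logits):
        # Pass projected student features through the teacher's classifier ...
        student_sem_logits = self.teacher_fc(self.proj(student_feat))
        # ... and align the resulting class distribution with the teacher's.
        loss = F.kl_div(
            F.log_softmax(student_sem_logits / self.tau, dim=1),
            F.softmax(teacher_logits / self.tau, dim=1),
            reduction="batchmean",
        ) * self.tau ** 2
        return loss
```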

Cited by 1 publication (1 citation statement)
References 36 publications
“…Prior works on open-set semi-supervised learning [48,49,50,51,52,53,54,55,56] have primarily focused on image classification tasks. For example, MTC [49] utilizes a joint optimization framework to estimate the OOD score of unlabeled images, which is achieved by alternately updating network parameters and estimated scores.…”
Section: Pseudo Label
confidence: 99%
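The citing passage describes an alternating scheme: the network parameters are updated with the OOD scores held fixed, then the per-sample scores are re-estimated with the network held fixed. A minimal sketch of such a loop follows; the score parameterization, the EMA update, and the loss weighting are assumptions for illustration and do not reproduce MTC [49]:

```python
# Sketch of alternating updates between network parameters and per-sample OOD
# scores for unlabeled data, in the spirit of the joint optimization described
# above. All modeling choices here are assumptions, not MTC's actual recipe.
import torch
import torch.nn.functional as F

def alternate_step(model, optimizer, labeled_batch, unlabeled_batch,
                   ood_scores, idx, momentum=0.9):
    x_l, y_l = labeled_batch
    x_u = unlabeled_batch

    # Step 1: update network parameters with the OOD scores held fixed.
    model.train()
    logits_l = model(x_l)
    logits_u = model(x_u)
    # Down-weight likely-OOD unlabeled samples using the current scores
    # (higher score = more likely in-distribution, an assumed convention).
    w = ood_scores[idx].detach()
    pseudo = logits_u.argmax(dim=1)
    loss = F.cross_entropy(logits_l, y_l) + (
        w * F.cross_entropy(logits_u, pseudo, reduction="none")
    ).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # Step 2: update the stored OOD scores with the network held fixed, here
    # via an EMA of the model's max softmax confidence (an assumed rule).
    with torch.no_grad():
        conf = model(x_u).softmax(dim=1).max(dim=1).values
        ood_scores[idx] = momentum * ood_scores[idx] + (1 - momentum) * conf
    return loss.item()
```

Here `ood_scores` would be a tensor with one entry per unlabeled sample and `idx` the indices of the current batch, so the score memory persists across epochs while the two update steps alternate.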