2021
DOI: 10.1007/978-3-030-87240-3_16
Categorical Relation-Preserving Contrastive Knowledge Distillation for Medical Image Classification

Cited by 19 publications (6 citation statements) | References 23 publications
“…In their framework, transfer learning was applied to a convolutional neural network for both plain and hierarchical classification, and used to differentiate between seven types of skin lesions. Xing et al [28] presented Categorical Relation-preserving Contrastive Knowledge Distillation (CRCKD), in which a teacher model supervises the student. They proposed a class-guided contrastive distillation (CCD) module that, guided by the teacher, pulls image pairs of the same class closer together while pushing apart negative images from different classes.…”
Section: Literature Review
confidence: 99%
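
The CCD mechanism quoted above (same-class pairs pulled toward the teacher, different-class pairs pushed apart) can be written down compactly. The following PyTorch sketch is an illustration of that idea, not the authors' released implementation; the function name, temperature, and tensor shapes are assumptions.

```python
# Hedged sketch of a class-guided contrastive distillation (CCD) loss.
# All names and hyperparameters here are illustrative assumptions.
import torch
import torch.nn.functional as F

def ccd_loss(student_feats, teacher_feats, labels, temperature=0.07):
    """Pull each student embedding toward teacher embeddings of the same
    class; push it away from teacher embeddings of other classes."""
    s = F.normalize(student_feats, dim=1)           # (N, D) student projections
    t = F.normalize(teacher_feats, dim=1).detach()  # (N, D) teacher not updated
    logits = s @ t.T / temperature                  # (N, N) pairwise similarities
    # Positives: teacher samples that share the anchor's class label.
    pos_mask = labels.unsqueeze(0) == labels.unsqueeze(1)        # (N, N) bool
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    # Average log-likelihood over all same-class (positive) teacher samples.
    mean_pos = (log_prob * pos_mask).sum(1) / pos_mask.sum(1).clamp(min=1)
    return -mean_pos.mean()
```

Detaching the teacher features reflects the supervisor role described in the statement: gradients flow only into the student.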
“…On the other hand, there are also many state-of-the-art methods with strong performance on skin lesion classification. The student-and-teacher model introduced in 2021 by Xiaohan Xing et al [19] is also a high-performing approach; it combines two models that share knowledge with each other, so each model can take full advantage of what the other learns.…”
Section: Introduction
confidence: 99%
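
One common way to realize the knowledge sharing this statement describes is a mean-teacher scheme, in which the teacher's weights are an exponential moving average (EMA) of the student's. The sketch below shows that update; treat it as one plausible instantiation rather than the authors' exact method, and the decay value is an assumption.

```python
# Hedged sketch of student/teacher weight sharing via a mean-teacher
# EMA update. The decay constant is an illustrative assumption.
import torch

@torch.no_grad()
def ema_update(teacher, student, decay=0.999):
    """Blend teacher weights toward the student's after each optimizer step,
    so the teacher accumulates a smoothed memory of what the student learns."""
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        t_param.mul_(decay).add_(s_param, alpha=1.0 - decay)
```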
“…Recently, self-supervised deep models such as contrastive learning have shown promising results in 3D medical image classification [8,19,20,1]. The pillar of contrastive learning is to augment a 3D image from one patient to form a "homogeneous" (positive) pair, and to use images from other patients to construct multiple "heterogeneous" (negative) pairs.…”
Section: Introduction
confidence: 99%
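
The homogeneous/heterogeneous pairing described above is typically scored with an InfoNCE objective: the two augmented views of the same patient form the positive pair, and the other patients in the batch supply the negatives. A minimal sketch, assuming paired embedding batches and a temperature of 0.1 (both assumptions):

```python
# Hedged sketch of an InfoNCE loss over patient-level pairs.
# view1[i] and view2[i] are embeddings of two augmentations of the same
# patient's 3D image; all other rows act as heterogeneous negatives.
import torch
import torch.nn.functional as F

def info_nce(view1, view2, temperature=0.1):
    z1 = F.normalize(view1, dim=1)    # (N, D)
    z2 = F.normalize(view2, dim=1)    # (N, D)
    logits = z1 @ z2.T / temperature  # (N, N): diagonal entries are positives
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)
```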