2021 IEEE International Symposium on Circuits and Systems (ISCAS)
DOI: 10.1109/iscas51556.2021.9401438

Knowledge Distillation Based on Positive-Unlabeled Classification and Attention Mechanism

Abstract: With the rapid development of deep learning, convolutional neural networks (CNNs) have achieved great success. However, these high-capability CNNs often carry a heavy computation and memory burden, which hinders their deployment in practical applications. To address this problem, this paper proposes a method for training a compact model with high capability: a student network with fewer parameters and computations learns from the knowledge of a teacher network with more parameters and computations.…
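The teacher-student setup described in the abstract follows the general knowledge-distillation pattern. As a point of reference only, the sketch below shows a generic Hinton-style soft-label distillation loss in PyTorch; the temperature `T`, weight `alpha`, and the `teacher`/`student` models are illustrative assumptions, and this is not the paper's positive-unlabeled or attention-based formulation.

```python
# Minimal sketch of response-based knowledge distillation (hypothetical
# teacher/student models; not the exact PU-classification + attention
# method proposed in the paper).
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Combine a softened KL term against the teacher with the usual hard-label CE."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)  # scale so the soft-term gradients match the hard loss
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# Usage inside a training step (teacher frozen, student trainable):
# with torch.no_grad():
#     teacher_logits = teacher(images)
# loss = distillation_loss(student(images), teacher_logits, labels)
```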

Cited by 3 publications (1 citation statement) · References 13 publications
“…For efficient DNN acceleration without model accuracy degradation, there have been many studies regarding lightweight deep learning techniques such as network pruning [15], [16], clustering [17], [18], knowledge distillation [19], [20] and hardware optimization [21], [22]. These techniques can be considered for lightweight DNN processing, but it is not easy to apply in practical applications due to their irregularities and dependencies on the neural network [23]. On the other hand, a quantization technique has been proposed as the simplest and most powerful method of lightweight DNN.…”
Section: Introduction (mentioning) · Confidence: 99%
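The citing work singles out quantization as the simplest of the lightweight-DNN techniques it lists alongside pruning, clustering, and distillation. As a rough illustration of what that technique involves, the sketch below shows symmetric per-tensor int8 post-training weight quantization in NumPy; the layer shape and the helper names are assumptions for illustration, not code from either paper.

```python
# Minimal sketch of symmetric per-tensor int8 post-training weight quantization,
# one example of the "simplest" lightweight-DNN technique mentioned above.
import numpy as np

def quantize_int8(w: np.ndarray):
    """Map float weights to int8 with a single scale factor."""
    scale = np.abs(w).max() / 127.0 if w.size else 1.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights, e.g. to evaluate accuracy impact."""
    return q.astype(np.float32) * scale

w = np.random.randn(256, 256).astype(np.float32)  # hypothetical layer weights
q, s = quantize_int8(w)
print("max abs error:", np.abs(dequantize(q, s) - w).max())
```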