2021 International Conference on Visual Communications and Image Processing (VCIP)
DOI: 10.1109/vcip53242.2021.9675335

A Novel Self-Knowledge Distillation Approach with Siamese Representation Learning for Action Recognition

Abstract: Knowledge distillation is an effective transfer of knowledge from a heavy network (teacher) to a small network (student) to boost the student's performance. Self-knowledge distillation, a special case of knowledge distillation, has been proposed to remove the training process of the large teacher network while preserving the student's performance. This paper introduces a novel self-knowledge distillation approach via Siamese representation learning, which minimizes the difference between two representation vectors of t…
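The abstract cuts off, but the stated idea of minimizing the difference between two representation vectors suggests a SimSiam-style objective. The sketch below is only an illustration of that idea in PyTorch, assuming a symmetric negative cosine similarity with stop-gradient on the targets; the names siamese_loss, encoder, and predictor are hypothetical and not taken from the paper.

    # Minimal sketch (assumed, not the authors' implementation): minimize the
    # difference between two representation vectors of the same input clip.
    import torch.nn.functional as F

    def siamese_loss(p1, z2, p2, z1):
        """Symmetric negative cosine similarity with stop-gradient on the targets."""
        return -(F.cosine_similarity(p1, z2.detach(), dim=-1).mean()
                 + F.cosine_similarity(p2, z1.detach(), dim=-1).mean()) / 2

    # Hypothetical usage with an encoder and a small predictor head:
    # z1, z2 = encoder(view1), encoder(view2)   # representations of two augmented views
    # p1, p2 = predictor(z1), predictor(z2)     # predicted representations
    # loss = siamese_loss(p1, z2, p2, z1)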

Cited by 10 publications (5 citation statements). References 29 publications (30 reference statements).
“…For instance, the proposed method achieves the best accuracy of 92.8% amongst all the comparative methods, followed by the runner-up D3D method [32], which obtains an accuracy of 78.7%. The SKD-SRL [33] method attains the lowest accuracy of 29.8% amongst all the comparative knowledge distillation-based methods for the HMDB51 dataset. The rest of the comparative methods, which include STDDCN [29], Prob-Distill [30], MHSA-KD [31], TY [34], (2+1)D Distilled ShuffleNet [36], and Self-Distillation (PPTK) [35], achieve accuracies of 66.8%, 72.2%, 57.8%, 32.8%, 59.9%, and 76.5%, respectively, for the HMDB51 dataset.…”
Section: 3 (mentioning)
confidence: 98%
“…Continuing research efforts in the same direction, Vu et al. [33] proposed a self-knowledge distillation method based on siamese representation learning. We note that siamese representation learning leverages a siamese neural network, which is sometimes also referred to as a twin neural network.…”
Section: Related Work (mentioning)
confidence: 99%
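As a purely illustrative aside, a siamese (twin) network can be read as a single weight-shared encoder applied to both inputs, so the two representation vectors live in the same space. A minimal PyTorch sketch follows; the class name SiameseEncoder and the backbone argument are assumptions, not code from [33].

    import torch
    import torch.nn as nn

    class SiameseEncoder(nn.Module):
        """Twin branches that share one set of weights: the same backbone is
        applied to both inputs (e.g., two augmented views of a video clip)."""
        def __init__(self, backbone: nn.Module):
            super().__init__()
            self.backbone = backbone  # assumed: any feature extractor, e.g. a 3D CNN

        def forward(self, view1: torch.Tensor, view2: torch.Tensor):
            return self.backbone(view1), self.backbone(view2)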
“…However, knowledge distillation still has several drawbacks, such as the high time and memory cost of training the heavy teacher network, and the large capacity gap between the teacher and student networks, which prevents the student from 'absorbing' the knowledge transferred by the teacher [12]. Self-knowledge distillation was therefore proposed to overcome these limitations: in self-knowledge distillation there is no teacher network, so the student network distills and transfers knowledge to itself [13], [14].…”
Section: Figure 1. Overview of Knowledge Distillation (unclassified)
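As the quoted passage explains, self-knowledge distillation drops the teacher network and lets the student transfer knowledge to itself. One common way to realize this combines cross-entropy with a temperature-scaled KL term between the network's own softened predictions on two augmented views; the sketch below is a generic illustration under that assumption, not the specific method of [13] or [14], and the names and default values (alpha, temperature) are hypothetical.

    import torch.nn.functional as F

    def self_distillation_loss(logits_a, logits_b, labels, temperature=4.0, alpha=0.5):
        """Cross-entropy on one view plus a KL term in which the network
        'teaches itself' by matching its own softened predictions across views."""
        ce = F.cross_entropy(logits_a, labels)
        kd = F.kl_div(F.log_softmax(logits_a / temperature, dim=-1),
                      F.softmax(logits_b.detach() / temperature, dim=-1),
                      reduction="batchmean") * temperature ** 2
        return (1.0 - alpha) * ce + alpha * kd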