2022 26th International Conference on Pattern Recognition (ICPR)
DOI: 10.1109/icpr56361.2022.9956634

(2+1)D Distilled ShuffleNet: A Lightweight Unsupervised Distillation Network for Human Action Recognition

Cited by 12 publications (6 citation statements) | References 25 publications
“…To further verify the validity of this article, we produced Tables 3–6. We compared the algorithms with classical ones (Vgg16 (Sitaula & Hossain, 2021), Resnet50 (Wen, Li & Gao, 2020), Densenet121 (Zhang et al, 2019), ResNext_34×4d-50 (Zhou, Zhao & Wu, 2021), ShuffleNetV2 (Vu, Le & Wang, 2022) and Mobilenetv3_large (Chen et al, 2022)), advanced algorithms (Conformer (Guo et al, 2021), RepMLP_B224 (Ding et al, 2021a), RepVGG_D2se (Ding et al, 2021b), ConvMixer (Ng et al, 2022), and Hornet-L-GF (Rao et al, 2022)), transformer algorithms (DeiT-base (Touvron et al, 2021), PoolFormer_M48 (Yu et al, 2022), SVT_large (Fan et al, 2022), EfficientFormer-l7 (Li et al, 2022b) and MViTv2_large (Li et al, 2022a)), and similar algorithms for experimental comparison.…”
Section: Methods
confidence: 99%
“…Further, they fixed the encoder from the last epoch as a teacher model to guide the training of the encoder in the current epoch during the transfer learning phase. In the work presented in [36], Vu et al proposed an unsupervised distillation learning framework called (2+1)D Distilled ShuffleNet to train a lightweight model for the human action recognition task. By leveraging the distillation technique, they developed (2+1)D Distilled ShuffleNet as an unsupervised approach, which did not require labeled data for training.…”
Section: Related Work
confidence: 99%
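
The label-free distillation idea described in the statement above can be illustrated with a minimal training-loop sketch. The teacher encoder, ShuffleNet-style student, projection head, and plain MSE feature-matching loss below are illustrative assumptions, not the authors' exact objective or architectures.

```python
# Minimal sketch of unsupervised feature distillation (assumptions:
# frozen teacher video encoder, lightweight student, MSE feature loss;
# not the exact (2+1)D Distilled ShuffleNet formulation).
import torch
import torch.nn as nn


def distill_one_epoch(teacher: nn.Module, student: nn.Module,
                      proj: nn.Module, loader, optimizer, device="cuda"):
    """Train the student to regress the frozen teacher's clip embeddings.
    No action labels are used at any point."""
    teacher.eval()
    student.train()
    criterion = nn.MSELoss()
    for clips, _ in loader:                # loader yields (clip, label); labels are ignored
        clips = clips.to(device)           # (B, C, T, H, W) video clips
        with torch.no_grad():
            target = teacher(clips)        # teacher embedding, shape (B, D_t)
        pred = proj(student(clips))        # project student features to D_t
        loss = criterion(pred, target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

A temperature-scaled KL divergence on soft logits or a cosine loss on normalized features are common alternatives to the MSE term; which loss the paper actually uses is not established by the excerpt above.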
“…The SKD-SRL [33] method attains the lowest accuracy of 29.8% amongst all the comparative knowledge distillation-based methods for the HMDB51 dataset. The rest of the comparative methods, which include STDDCN [29], Prob-Distill [30], MHSA-KD [31], TY [34], (2+1)D Distilled ShuffleNet [36], and Self-Distillation (PPTK) [35], achieve accuracies of 66.8%, 72.2%, 57.8%, 32.8%, 59.9%, and 76.5%, respectively, for the HMDB51 dataset. Similarly, for the UCF101 dataset in Table 13, our proposed framework outperforms the other comparative knowledge distillation-based methods by obtaining the best accuracy of 97.3%, followed by the D3D [32] method, which attains the second-best accuracy of 97.0%.…”
Section: 3
confidence: 99%
“…Experimental results show that ShuffleNetV2 achieves a 63% speed improvement compared to ShuffleNetV1. In the latest research on ShuffleNet, a lightweight network called (2+1)D Distilled ShuffleNet is proposed in [92] for human action recognition using an unsupervised distillation learning paradigm. This network extracts knowledge from the teacher network through distillation techniques without the need for labeled data.…”
confidence: 99%
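
For context on the "(2+1)D" part of the network's name, the following is a generic sketch of the (2+1)D factorization, in which a full 3-D convolution is replaced by a 2-D spatial convolution followed by a 1-D temporal convolution. Channel widths, normalization, and activation placement here are assumptions for illustration, not the exact block used in [92].

```python
# Generic (2+1)D convolution block: a 3x3x3 convolution factorized into
# a 1x3x3 spatial convolution followed by a 3x1x1 temporal convolution.
# Widths and BatchNorm/ReLU placement are illustrative assumptions.
import torch
import torch.nn as nn


class Conv2Plus1D(nn.Module):
    def __init__(self, in_ch, out_ch, mid_ch=None):
        super().__init__()
        mid_ch = mid_ch if mid_ch is not None else out_ch
        self.spatial = nn.Conv3d(in_ch, mid_ch, kernel_size=(1, 3, 3),
                                 padding=(0, 1, 1), bias=False)
        self.temporal = nn.Conv3d(mid_ch, out_ch, kernel_size=(3, 1, 1),
                                  padding=(1, 0, 0), bias=False)
        self.bn = nn.BatchNorm3d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        # x: (B, C, T, H, W) video tensor
        return self.act(self.bn(self.temporal(self.spatial(x))))


# Example: a batch of two 8-frame 112x112 RGB clips.
x = torch.randn(2, 3, 8, 112, 112)
y = Conv2Plus1D(3, 24)(x)   # -> torch.Size([2, 24, 8, 112, 112])
```

The factorization keeps the receptive field of a 3x3x3 kernel while adding an extra nonlinearity between the spatial and temporal steps, which is the usual motivation cited for (2+1)D designs.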