2023
DOI: 10.1109/jsen.2023.3255408
E2CNN: An Efficient Concatenated CNN for Classification of Surface EMG Extracted From Upper Limb


Cited by 17 publications (10 citation statements)
References 34 publications (30 reference statements)
“…We conduct a comprehensive comparison of our proposed gesture classification method with existing CNN-based gesture classification models, including GengNet [42], Cheng et al [43], Wei et al [44], E2CNN [45], Yang et al [46], Zhai et al [25], Ding et al [47], Chen et al [26], Vitale et al [48], Peng et al [49], AtzoriNet [50], CNNLM [51], EVCNN [24], Hu et al [52], Pizzolato et al [53], MSCNet [54], DVMSCNN [55], and MV-CNN [41]. These models, like ours, are CNN-based classifiers that independently classify each frame of EMG data.…”
Section: Compared Methods
confidence: 99%
“…Many of them have achieved state-of-the-art performance on the Ninapro datasets. For example, E2CNN [45] and Yang et al [46] have both achieved over 90% classification accuracy on the Ninapro DB1 dataset. Particular emphasis is given to the comparison with MV-CNN [41], as it has been extensively evaluated on diverse public datasets and demonstrated consistently high accuracy over all datasets.…”
Section: Compared Methods
confidence: 99%
“…Surface electromyography (sEMG) signals, an important bioelectrical indicator of muscle contraction, are the subject of classification research focused on their potential for controlling upper-limb prosthetic devices [27]. The authors used E2CNN, an efficient concatenated convolutional neural network optimized for fast response and real-time performance, to achieve these goals.…”
Section: Literature Review
confidence: 99%
“…Qureshi, M. F. et al [23] generated Mel spectrograms based on a 6-channel sEMG dataset from 8 intact subjects performing 11 gesture motions, repeated daily for 7 days, and proposed a CNN model for the classification of Mel spectrograms, achieving a recognition accuracy of 99.42%. Qureshi, M. F. et al [24] generated Log-Mel spectrograms based on a 10-channel sEMG dataset from intact subjects performing 10 gesture motions and proposed an E2CNN model for the classification of Log-Mel spectrograms, achieving a recognition accuracy of 91.27%. Zhang et al [25] generated Hilbert graphs from 10-channel sEMG of 52 gesture actions by the Hilbert transform and proposed a dual-view multi-scale convolutional neural network for Hilbert graph classification with recognition accuracy of 86.72%.…”
Section: Introduction
confidence: 99%
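The log-Mel spectrogram front end mentioned in the citation above can be sketched roughly as follows. This is a minimal illustration using only NumPy/SciPy; the sampling rate, FFT size, and number of mel bands are illustrative assumptions, not the parameters used in E2CNN or the cited datasets:

```python
import numpy as np
from scipy import signal

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_mels, n_fft, fs):
    # Triangular filters spaced evenly on the mel scale up to Nyquist.
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(fs / 2.0), n_mels + 2)
    hz_pts = mel_to_hz(mel_pts)
    bins = np.floor((n_fft + 1) * hz_pts / fs).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(1, n_mels + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        for k in range(left, center):
            fb[i - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):
            fb[i - 1, k] = (right - k) / max(right - center, 1)
    return fb

fs = 2000                      # assumed sEMG sampling rate (Hz)
t = np.arange(0, 1.0, 1.0 / fs)
rng = np.random.default_rng(0)
# Toy burst-like signal standing in for one sEMG channel.
emg = rng.standard_normal(t.size) * np.sin(2 * np.pi * 2 * t)

n_fft = 256
_, _, Z = signal.stft(emg, fs=fs, nperseg=n_fft, noverlap=n_fft // 2)
power = np.abs(Z) ** 2                       # short-time power spectrum
fb = mel_filterbank(32, n_fft, fs)           # 32 assumed mel bands
log_mel = 10.0 * np.log10(fb @ power + 1e-10)  # (n_mels, n_frames) image
print(log_mel.shape)
```

The resulting 2-D log-Mel image is the kind of time-frequency representation that a CNN classifier such as the one described in the citation would take as input, with one such image per sEMG channel or per analysis window.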