2022 · DOI: 10.1088/1741-2552/ac4852

FB-CGANet: filter bank channel group attention network for multi-class motor imagery classification

Abstract: Objective. Motor imagery-based brain-computer interface (MI-BCI) is one of the most important BCI paradigms and can identify the target limb of subjects from the features of MI-based electroencephalography (EEG) signals. Deep learning methods, especially lightweight neural networks, provide an efficient technique for MI decoding, but the performance of lightweight neural networks is still limited and needs further improvement. This paper aimed to design a novel lightweight neural network for improving the performa…

Cited by 18 publications (8 citation statements) · References 32 publications
“…Although it has shown success in image recognition, it does not transfer well in MI decoding. Further investigation on attention mechanism might help facilitate EEG feature representation [24]. In conclusion, the summation operator deals with cross-frequency interactions effectively and efficiently.…”
Section: E. Ablation Studies (mentioning)
confidence: 95%
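The summation fusion mentioned in this statement is straightforward to reproduce. A minimal sketch (all shapes and dimensions assumed here for illustration): per-band feature maps from a filter bank are simply summed along the band axis to fuse cross-frequency information.

```python
import torch

batch, bands, feat, time = 16, 4, 8, 250    # assumed dimensions
x = torch.randn(batch, bands, feat, time)   # per-band feature maps from a filter bank

fused = x.sum(dim=1)                        # cross-band fusion by summation
print(fused.shape)                          # torch.Size([16, 8, 250])
```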
“…EEGNet learns frequency filters entirely by backpropagation, which omits hand-crafted spectral bands that might be beneficial for mining mutual spectral information. More recently, channel group attention was introduced in [24] to deal with filter bank inputs. Different from the perspective of band interaction, their inter-channel attention targets scaling the feature channels to improve the expression ability of representative features in both bands.…”
Section: Introduction (mentioning)
confidence: 99%
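A rough PyTorch sketch of the channel-group-attention idea described in this statement (not the published FB-CGANet module; the layer sizes, the shared squeeze-excite MLP, and the group count are assumptions chosen for illustration):

```python
import torch
import torch.nn as nn

class GroupChannelAttention(nn.Module):
    """Illustrative channel-group attention: split the feature channels into
    per-band groups and learn a scaling weight for each channel, so that
    informative channels within each sub-band are emphasised.
    (Hypothetical layer sizes; not the published FB-CGANet definition.)"""
    def __init__(self, channels: int, groups: int = 4, reduction: int = 2):
        super().__init__()
        assert channels % groups == 0
        self.groups = groups
        g = channels // groups
        # a small squeeze-excite style MLP shared across groups
        self.fc = nn.Sequential(
            nn.Linear(g, g // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(g // reduction, g),
            nn.Sigmoid(),
        )

    def forward(self, x):                      # x: (batch, channels, time)
        b, c, t = x.shape
        xg = x.view(b, self.groups, c // self.groups, t)
        w = self.fc(xg.mean(dim=-1))           # (batch, groups, channels/groups)
        return (xg * w.unsqueeze(-1)).view(b, c, t)

# usage: four groups matching four filter-bank sub-bands
att = GroupChannelAttention(channels=32, groups=4)
out = att(torch.randn(8, 32, 250))
print(out.shape)                               # torch.Size([8, 32, 250])
```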
“…Our implementation of FB-Sinc-ShallowNet was based on that of Sinc-ShallowNet, and the structure was modified according to the original work, with similar performance obtained. • FB-CGANet [15]: In FB-CGANet, a CGA method was proposed for efficient feature integration, and a dual-branch lightweight network was designed with a hybrid filter bank structure in both the time and frequency domain. Since this work described the details of the network structure, we implemented it in the PyTorch framework, and comparable performance was found on the BCIC IV IIa dataset.…”
Section: Compared Methods (mentioning)
confidence: 99%
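The filter-bank front end referred to in this statement can be illustrated with a simple band-pass decomposition. The band edges below are typical motor-rhythm ranges chosen for illustration, not the exact FB-CGANet settings.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def filter_bank(eeg: np.ndarray, fs: float,
                bands=((4, 8), (8, 13), (13, 30), (30, 40))):
    """Band-pass an EEG trial (channels x samples) into several sub-bands.
    Band edges here are illustrative motor-rhythm ranges, not the exact
    FB-CGANet configuration."""
    out = []
    for lo, hi in bands:
        b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="bandpass")
        out.append(filtfilt(b, a, eeg, axis=-1))
    return np.stack(out)           # (n_bands, channels, samples)

x = np.random.randn(22, 1000)      # e.g. 22 channels, 4 s at 250 Hz
fb = filter_bank(x, fs=250.0)
print(fb.shape)                    # (4, 22, 1000)
```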
“…The implementation of SE, ECA, and SA was based on the PyTorch version of SE 7 , the project created by Wang et al 8 , and codes released by Zhang et al 9 , respectively. CGA was implemented with the PyTorch framework according to the report of Chen et al [15]. In SA and CGA, the number of channel groups was set as 4 to group features by corresponding frequency bands for allocating attention via information of four motor rhythms.…”
Section: The Influence of Feature Selection by Channel Self-Attention (mentioning)
confidence: 99%
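For context, the standard squeeze-and-excitation (SE) channel attention that these compared methods build on looks roughly like this (generic formulation, independent of the cited code releases):

```python
import torch
import torch.nn as nn

class SEBlock1d(nn.Module):
    """Standard squeeze-and-excitation attention for 1-D (time-series) feature
    maps: global-average-pool each channel, pass the result through a small
    bottleneck MLP, and rescale the channels by the resulting weights."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                 # x: (batch, channels, time)
        w = self.fc(x.mean(dim=-1))       # per-channel weights, (batch, channels)
        return x * w.unsqueeze(-1)

se = SEBlock1d(channels=32)
print(se(torch.randn(8, 32, 250)).shape)  # torch.Size([8, 32, 250])
```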