ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
DOI: 10.1109/icassp43922.2022.9747488
A Channel Attention Based MLP-Mixer Network for Motor Imagery Decoding With EEG

Cited by 11 publications (4 citation statements). References 15 publications.

“…With the rapid advancement of deep learning, methods including but not limited to convolutional neural networks (CNNs) [6]-[8], recurrent neural networks (RNNs) [9]-[11], multilayer perceptrons (MLPs) [12], [13], and graph convolutional networks (GCNs) [14]-[20] have demonstrated marked effectiveness in EEG classification tasks.…”
Section: Introduction (mentioning)
confidence: 99%
“…EEGNet and ShallowConvNet utilize convolutional layers to extract spatial and temporal patterns from EEG data. However, EEGNet may struggle to capture long-range temporal dependencies [43], while ShallowConvNet may not be as effective as deeper architectures at capturing complex patterns. DeepConvNet excels at capturing spatial and temporal patterns but requires a large amount of training data to avoid overfitting [44].…”
Section: Introduction (mentioning)
confidence: 99%
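The excerpt above contrasts convolutional EEG decoders that learn temporal and spatial filters. For orientation, the following is a minimal PyTorch sketch of an EEGNet-style block (a temporal convolution followed by a depthwise spatial convolution across electrodes); the class name, layer sizes, and kernel lengths are illustrative assumptions, not the exact configuration of EEGNet, ShallowConvNet, or DeepConvNet.

```python
# Illustrative EEGNet-style convolutional block (a sketch, not the exact
# architecture of any model named in the excerpt).
import torch
import torch.nn as nn

class ConvEEGBlock(nn.Module):
    def __init__(self, n_channels=22, n_samples=1000, n_classes=4,
                 f1=8, depth=2, kernel_len=64):
        super().__init__()
        # Temporal convolution: learns band-pass-like filters along time.
        self.temporal = nn.Sequential(
            nn.Conv2d(1, f1, (1, kernel_len), padding=(0, kernel_len // 2), bias=False),
            nn.BatchNorm2d(f1),
        )
        # Depthwise spatial convolution: mixes information across EEG electrodes.
        self.spatial = nn.Sequential(
            nn.Conv2d(f1, f1 * depth, (n_channels, 1), groups=f1, bias=False),
            nn.BatchNorm2d(f1 * depth),
            nn.ELU(),
            nn.AvgPool2d((1, 4)),
            nn.Dropout(0.5),
        )
        self.classify = nn.Linear(f1 * depth * ((n_samples + 1) // 4), n_classes)

    def forward(self, x):  # x: (batch, 1, channels, samples)
        x = self.spatial(self.temporal(x))
        return self.classify(x.flatten(1))

# Example: 8 trials, 22 electrodes, 1000 time samples -> (8, 4) class logits.
# logits = ConvEEGBlock()(torch.randn(8, 1, 22, 1000))
```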
“…Hou and Jia et al. [6] applied long short-term memory (LSTM) networks to extract features and used graph convolutional networks (GCNs) to model topological structure. He et al. [7] applied channel attention to a multilayer perceptron to adaptively learn the importance of each channel. Specific to the task of EEG emotion recognition, many researchers have also proposed some approaches.…”
Section: Introduction (mentioning)
confidence: 99%
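The last excerpt summarizes the cited paper's central idea: channel attention combined with an MLP so that the network adaptively weights each EEG channel. As a rough illustration of that general idea only, here is a minimal squeeze-and-excitation-style channel attention module feeding an MLP classifier; the module names, reduction ratio, and layer sizes are assumptions made for this sketch and do not reproduce the authors' exact MLP-Mixer design.

```python
# Sketch of channel attention over EEG electrodes feeding an MLP classifier.
# Illustrates the idea described in the excerpt; not the exact model of He et al.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation-style weighting of EEG channels (electrodes)."""
    def __init__(self, n_channels=22, reduction=4):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Linear(n_channels, n_channels // reduction),
            nn.ReLU(),
            nn.Linear(n_channels // reduction, n_channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                 # x: (batch, channels, samples)
        squeeze = x.mean(dim=-1)          # pool over time -> (batch, channels)
        weights = self.gate(squeeze)      # learned per-channel importance
        return x * weights.unsqueeze(-1)  # reweight each electrode's signal

class AttentionMLPClassifier(nn.Module):
    def __init__(self, n_channels=22, n_samples=1000, n_classes=4, hidden=256):
        super().__init__()
        self.attn = ChannelAttention(n_channels)
        self.mlp = nn.Sequential(
            nn.Flatten(),
            nn.Linear(n_channels * n_samples, hidden),
            nn.GELU(),
            nn.Dropout(0.5),
            nn.Linear(hidden, n_classes),
        )

    def forward(self, x):                 # x: (batch, channels, samples)
        return self.mlp(self.attn(x))

# Example: logits = AttentionMLPClassifier()(torch.randn(8, 22, 1000))  # -> (8, 4)
```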