2021
DOI: 10.1088/1741-2552/ac1d36

Distinguishable spatial-spectral feature learning neural network framework for motor imagery-based brain–computer interface

Abstract: Objective. Spatial and spectral features extracted from electroencephalogram (EEG) are critical for the classification of motor imagery (MI) tasks. As prevalently used methods, the common spatial pattern (CSP) and filter bank CSP (FBCSP) can effectively extract spatial-spectral features from MI-related EEG. To further improve the separability of the CSP features, we proposed a distinguishable spatial-spectral feature learning neural network (DSSFLNN) framework for MI-based brain–computer interfaces (BCIs) in t…

Cited by 9 publications (6 citation statements) · References 36 publications

“…This indicates that FB-CGANet can yield superior and more stable performance than the compared methods. Also, compared with recent methods, the cross-validation result of FB-CGANet is higher than that of the Enhanced Multimodal Fusion method with a mean accuracy of 83.15% [32] and the distinguishable spatial-spectral feature learning neural network method with a mean kappa of 0.70 [33], but slightly lower than the 10-fold cross-validation result of the CNN-LSTM hybrid method (95.65%), which used window slicing on both EOG and EEG data for data augmentation [34].…”
Section: -Fold Cross Validation On Session (mentioning)
confidence: 84%
“…Essentially, the attention mechanism enables the model to focus on the key information, providing an important way to capture more reliable features [54]. Recently, the attention mechanism has been widely used in MI-EEG classification tasks [25,55]; for example, Liu et al [55] applied an attention mechanism to the extracted CSP-wise features and class-wise features to further improve MI classification accuracy. In our work, the function of the ECA module is to recalibrate features by explicitly modeling the interdependencies between feature channels, and it can be easily applied in existing CNN models [39,55]. Therefore, ECA adaptively assigns higher weights to the spectral-temporal features to obtain more discriminative features.…”
Section: Overall Classification Results and Comparison (mentioning)
confidence: 99%
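The quoted description of the ECA module (recalibrating features by modeling interdependencies between feature channels) maps onto a compact channel-attention layer. The sketch below is a generic ECA-style block, not the cited authors' code; PyTorch, the kernel size, and the (batch, channels, samples) tensor layout are assumptions made for illustration.

```python
# Minimal sketch (not the cited authors' code): an ECA-style channel-attention
# block that re-weights feature channels using a 1-D convolution over the
# channel-wise global-average-pooled descriptor. PyTorch is assumed; the
# kernel size and tensor layout are illustrative choices.
import torch
import torch.nn as nn


class ECABlock(nn.Module):
    def __init__(self, kernel_size: int = 3):
        super().__init__()
        # A 1-D conv across the channel dimension models local cross-channel
        # interaction without the dimensionality reduction used in SE blocks.
        self.conv = nn.Conv1d(1, 1, kernel_size, padding=kernel_size // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, samples), e.g. spectral-temporal feature maps
        y = x.mean(dim=-1, keepdim=True)       # squeeze: (B, C, 1)
        y = self.conv(y.transpose(1, 2))       # (B, 1, C): conv over channels
        w = self.sigmoid(y.transpose(1, 2))    # per-channel weights in (0, 1)
        return x * w                           # recalibrated features


# toy usage: 8 trials, 16 feature channels, 256 samples
feats = torch.randn(8, 16, 256)
print(ECABlock(kernel_size=3)(feats).shape)    # torch.Size([8, 16, 256])
```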
“…The SE module in our method can re-calibrate the data by explicitly modeling interdependencies between EEG channels [23]. As shown in Fig.…”
Section: Discussion (mentioning)
confidence: 99%
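For comparison with the ECA sketch above, the SE module quoted here (re-calibration by modeling interdependencies between EEG channels) can be illustrated with a squeeze-and-excitation block over the channel dimension. This is a hedged sketch, not the cited method's implementation; PyTorch and the reduction ratio r are assumptions.

```python
# Minimal sketch (not the cited method's code): a squeeze-and-excitation (SE)
# block applied over the EEG-channel dimension. PyTorch and the reduction
# ratio r are assumptions for illustration.
import torch
import torch.nn as nn


class SEBlock(nn.Module):
    def __init__(self, n_channels: int, r: int = 4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(n_channels, n_channels // r),  # squeeze to a bottleneck
            nn.ReLU(inplace=True),
            nn.Linear(n_channels // r, n_channels),  # excite back to C weights
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, EEG channels, time points)
        s = x.mean(dim=-1)              # global average pool per channel
        w = self.fc(s).unsqueeze(-1)    # per-channel weights, (B, C, 1)
        return x * w                    # re-calibrated signal/feature tensor


# toy usage: 8 trials, 22 EEG channels, 1000 time points
eeg = torch.randn(8, 22, 1000)
print(SEBlock(n_channels=22)(eeg).shape)   # torch.Size([8, 22, 1000])
```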
“…The principle of CSP is to seek a spatial projection matrix that maximizes the covariance of one class of EEG while simultaneously minimizing the covariance of the other class (see equation (2)) [22]. For multi-class MI tasks, the one-versus-rest CSP (OVR-CSP) treats one task as one class and the remaining tasks as the other class [23]. Assume that X_{i,j} ∈ ℝ^{C×T} denotes the j-th EEG trial belonging to the i-th class, where C and T are the numbers of channels and time points, respectively.…”
Section: A Common Spatial Pattern Features Extraction With Retaining ... (mentioning)
confidence: 99%
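The CSP principle quoted above (a spatial projection that maximizes the covariance of one class while minimizing that of the other) and its one-versus-rest extension can be sketched with a textbook implementation. The code below is a generic NumPy/SciPy illustration following the quoted notation X_{i,j} ∈ ℝ^{C×T}; the number of spatial filters and the toy data shapes are illustrative assumptions, not the cited paper's pipeline.

```python
# Minimal sketch of two-class CSP and a one-versus-rest (OVR) wrapper, using
# NumPy/SciPy only. It follows the textbook formulation (generalized
# eigendecomposition of the class covariance matrices); it is NOT the cited
# paper's exact implementation, and n_filters is an illustrative choice.
import numpy as np
from scipy.linalg import eigh


def trial_covariance(x):
    """Normalized spatial covariance of one trial X in R^{C x T}."""
    c = x @ x.T
    return c / np.trace(c)


def csp_filters(class_a, class_b, n_filters=4):
    """Return n_filters spatial filters (rows) separating two sets of trials."""
    cov_a = np.mean([trial_covariance(x) for x in class_a], axis=0)
    cov_b = np.mean([trial_covariance(x) for x in class_b], axis=0)
    # Generalized eigenproblem: cov_a w = lambda (cov_a + cov_b) w.
    # The extreme eigenvalues give filters that maximize variance for one
    # class while minimizing it for the other.
    vals, vecs = eigh(cov_a, cov_a + cov_b)
    order = np.argsort(vals)
    pick = np.r_[order[: n_filters // 2], order[-(n_filters // 2):]]
    return vecs[:, pick].T                      # (n_filters, C)


def ovr_csp(trials_by_class, n_filters=4):
    """OVR-CSP: one filter bank per class (that class vs. all the rest)."""
    return {
        k: csp_filters(
            v,
            [x for kk, vv in trials_by_class.items() if kk != k for x in vv],
            n_filters,
        )
        for k, v in trials_by_class.items()
    }


# toy usage: 3 MI classes, 10 trials each, 22 channels x 500 time points
rng = np.random.default_rng(0)
data = {k: [rng.standard_normal((22, 500)) for _ in range(10)] for k in range(3)}
filters = ovr_csp(data)
print({k: w.shape for k, w in filters.items()})  # {0: (4, 22), 1: (4, 22), 2: (4, 22)}
```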