2021
DOI: 10.1109/access.2021.3116196

Decoding Imagined Speech From EEG Using Transfer Learning

Abstract: We present a transfer-learning-based approach for decoding imagined speech from electroencephalogram (EEG). Features are extracted simultaneously from multiple EEG channels rather than separately from individual channels, which helps capture the interrelationships between cortical regions. To alleviate the lack of sufficient data for training deep networks, sliding-window data augmentation is performed. Mean phase coherence and magnitude-squared coherence, two popular measures used in EEG…
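The sliding-window augmentation named in the abstract is simple to illustrate. Below is a minimal Python sketch, assuming trials shaped (channels, samples); the window length and stride are illustrative values, not the paper's reported settings.

```python
import numpy as np

def sliding_windows(trial: np.ndarray, win_len: int, stride: int) -> np.ndarray:
    """Cut one multi-channel EEG trial into overlapping windows.

    trial   : array of shape (n_channels, n_samples)
    win_len : window length in samples
    stride  : hop between consecutive window starts, in samples
    returns : array of shape (n_windows, n_channels, win_len)
    """
    _, n_samples = trial.shape
    starts = range(0, n_samples - win_len + 1, stride)
    return np.stack([trial[:, s:s + win_len] for s in starts])

# Example: a 64-channel, 5 s trial at 256 Hz becomes 17 training windows.
trial = np.random.randn(64, 5 * 256)
windows = sliding_windows(trial, win_len=256, stride=64)  # 1 s windows, 75% overlap
print(windows.shape)  # (17, 64, 256)
```

Each window inherits the label of its parent trial, multiplying the number of training examples available to the deep network.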

Cited by 17 publications (5 citation statements) · References 52 publications
“…As shown in tables 4 and 5, the proposed cross-task transfer learning model achieves a competitive accuracy of 82.35% for the target KaraOne model and 89.01% for the target FEIS model in multiclass classification, compared with prior transfer learning studies: Cooney et al [28] (cross-subject, 35.68%), Vorontsova et al [29] (cross-domain, 84.5%), Tamm et al [30] (cross-subject, 24.77%), and Panachakel et al [31] (cross-domain, 79.7%–95.5%). The results show that transfer learning procedures are significant for the generalizability of decoding imagined-speech EEG signals, despite the difficulty of comparing state-of-the-art transfer learning research in which the investigations used distinct datasets.…”
Section: Comparison With the State-of-the-art (mentioning)
confidence: 65%
“…Tamm et al [30] explored imagined speech with five vowels and six words using an inter-subject classifier transfer learning approach, yielding noticeably lower performance. In experiments on intra-subject knowledge transfer, Panachakel et al [31] applied a ResNet50 transfer learning classifier to five words and three vowels of imagined speech. The majority of recent research has focused on decoding imagined speech prompts using cross-subject or cross-session knowledge sharing.…”
Section: Introduction (mentioning)
confidence: 99%
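For readers unfamiliar with the classifier-transfer recipe these studies refer to, the sketch below shows the generic pattern: freeze an ImageNet-pretrained ResNet50 and retrain only a new classification head on EEG-derived feature images. The input size, the five-class head, and the freezing choices are assumptions for illustration, not the exact configuration of [31].

```python
import torch
import torch.nn as nn
from torchvision import models  # requires torchvision >= 0.13 for the weights API

N_CLASSES = 5  # e.g. five imagined prompts; an assumption for illustration

# Load an ImageNet-pretrained backbone and freeze all of its parameters.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
for p in model.parameters():
    p.requires_grad = False

# Replace the final fully connected layer with a new, trainable head.
model.fc = nn.Linear(model.fc.in_features, N_CLASSES)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_step(x: torch.Tensor, y: torch.Tensor) -> float:
    """One fine-tuning step on a batch of (B, 3, 224, 224) feature images."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Only the new head is updated, so the approach remains trainable even on the small per-subject datasets typical of imagined-speech EEG.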
“…Cooney et al (2020) used relative wavelet energy features and several CNNs (shallow CNN, deep CNN, and EEGNet) with different hyperparameters to classify two different imagined speech datasets. Panachakel and Ganesan (2021b) used ResNet50 to classify imagined vowels (two classes) and short-long words (three classes), obtaining classification accuracies of 86.28% and 92.8%, respectively. Lee et al (2020) achieved 13-class (12 words/phrases and rest state) classification accuracy of 39.73% using frequency-band spectral features and an SVM with RBF kernel as the classifier.…”
Section: Introduction (mentioning)
confidence: 99%
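As a contrast to the deep transfer learning models, the Lee et al (2020) baseline quoted above pairs frequency-band spectral features with an RBF-kernel SVM. A minimal sketch of that style of pipeline follows; the band edges, Welch settings, and classifier hyperparameters are assumptions, not that study's exact configuration.

```python
import numpy as np
from scipy.signal import welch
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def band_powers(trial: np.ndarray, fs: float) -> np.ndarray:
    """Mean Welch power per (channel, band) for one (n_channels, n_samples) trial."""
    freqs, psd = welch(trial, fs=fs, nperseg=min(256, trial.shape[-1]), axis=-1)
    feats = [psd[:, (freqs >= lo) & (freqs < hi)].mean(axis=-1)
             for lo, hi in BANDS.values()]
    return np.concatenate(feats)

# X: trials as (n_trials, n_channels, n_samples); y: integer labels (13 classes).
fs = 256.0
X, y = np.random.randn(40, 64, 512), np.random.randint(0, 13, size=40)
features = np.array([band_powers(t, fs) for t in X])
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(features, y)
```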
“…Sarmiento et al [20] used a CNN-based model to classify five English vowels. Panachakel and Ganesan [21] used sliding-window data augmentation and transfer learning on a ResNet50 backbone to classify imagined spoken words and vowels.…”
Section: Introduction (mentioning)
confidence: 99%
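The Panachakel and Ganesan pipeline referenced in these statements builds its inputs from the two pairwise channel-coupling measures named in the abstract: mean phase coherence (MPC) and magnitude-squared coherence (MSC). The sketch below computes both for one window using common definitions (Hilbert-phase locking for MPC, Welch cross-spectra for MSC); reducing each pair to a single scalar is an illustrative simplification, not the paper's exact feature layout.

```python
import numpy as np
from scipy.signal import hilbert, coherence

def mean_phase_coherence(x: np.ndarray, y: np.ndarray) -> float:
    """MPC = |<exp(j*(phi_x - phi_y))>| over time; 1 means perfect phase locking."""
    dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return float(np.abs(np.mean(np.exp(1j * dphi))))

def mean_msc(x: np.ndarray, y: np.ndarray, fs: float) -> float:
    """Magnitude-squared coherence |Pxy|^2 / (Pxx * Pyy), averaged over frequency."""
    _, cxy = coherence(x, y, fs=fs, nperseg=min(256, len(x)))
    return float(cxy.mean())

# Pairwise features over all channel pairs of one (n_channels, n_samples) window.
fs, win = 256.0, np.random.randn(8, 512)
pairs = [(i, j) for i in range(win.shape[0]) for j in range(i + 1, win.shape[0])]
mpc = [mean_phase_coherence(win[i], win[j]) for i, j in pairs]
msc = [mean_msc(win[i], win[j], fs) for i, j in pairs]
```

Because both measures are computed between channels, the resulting features encode the inter-regional relationships the abstract emphasizes, rather than per-channel properties alone.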