2019 IEEE 16th India Council International Conference (INDICON)
DOI: 10.1109/indicon47234.2019.9028925

Decoding Imagined Speech using Wavelet Features and Deep Neural Networks

Abstract: This paper proposes a novel approach that uses deep neural networks for classifying imagined speech, significantly increasing the classification accuracy. The proposed approach employs only the EEG channels over specific areas of the brain for classification, and derives distinct feature vectors from each of those channels. This gives us more data to train a classifier, enabling us to use deep learning approaches. Wavelet and temporal domain features are extracted from each channel. The final class label of ea…

Cited by 30 publications (24 citation statements)
References 16 publications
“…Taking into account the presented works, the average recognition accuracy for the binary classification of words or syllables approached 72%, with a maximum accuracy of 85.57% [7-9,11,14,15,21,23,24]. For four words, according to the papers presented, the classification accuracy averaged 59%, while the maximum accuracy reached 99.38% [13,15,23,25].…”
Section: Introduction
confidence: 75%
“…In 2019, scientists from India published a method for recognizing seven syllables, /iy/, /piy/, /tiy/, /diy/, /uw/, /m/, and /n/, and four words, pat, pot, knew, and gnaw, on data from 11 EEG channels located in areas of the cerebral cortex involved in speech presentation [14]. Due to the high correlation between different channels, each channel was used as a separate object for training the neural network.…”
Section: Introduction
confidence: 99%
“…Similar to García et al. (2012), EEG channels are manually chosen in Panachakel et al. (2019). Specifically, the following 11 EEG channels are chosen based on the significance of the cortical region they cover in language processing (Marslen-Wilson and Tyler, 2007; Alderson-Day et al., 2015):…”
Section: Preprocessing
confidence: 99%
“…In another work by Panachakel et al. (2019), a combination of time- and wavelet-domain features was employed. Corresponding to each trial, the EEG signal of 3-s duration was decomposed into 7 levels using the db4 wavelet, and five statistical features, namely root mean square, variance, kurtosis, skewness, and fifth-order moment, were extracted from the last three detail coefficients and from the last approximation coefficient.…”
Section: Feature Extraction and Classification
confidence: 99%
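The feature-extraction recipe quoted above can be sketched in NumPy alone. This is a minimal sketch, not the authors' implementation: the function names, the simplified filter-bank DWT (no special edge handling, unlike a full wavelet library), the sampling rate of the 3-s trial, and the reading of "last three detail coefficients" as the three coarsest bands (cD7, cD6, cD5) are all illustrative assumptions; only the db4 wavelet, the 7-level decomposition, and the five statistics come from the description.

```python
import numpy as np

# db4 low-pass decomposition filter (standard published 8-tap coefficients).
DEC_LO = np.array([-0.0105974018, 0.0328830117, 0.0308413818, -0.1870348117,
                   -0.0279837694, 0.6308807679, 0.7148465706, 0.2303778133])
# High-pass filter via the quadrature-mirror relation g[n] = (-1)^n h[L-1-n].
DEC_HI = DEC_LO[::-1] * np.array([(-1) ** n for n in range(len(DEC_LO))])

def dwt_step(x):
    """One DWT level: filter with low/high-pass, then downsample by 2."""
    a = np.convolve(x, DEC_LO)[::2]   # approximation coefficients
    d = np.convolve(x, DEC_HI)[::2]   # detail coefficients
    return a, d

def wavedec(x, level=7):
    """Multi-level decomposition; returns [cA7, cD7, cD6, ..., cD1]."""
    details = []
    a = x
    for _ in range(level):
        a, d = dwt_step(a)
        details.append(d)
    return [a] + details[::-1]

def stats5(c):
    """RMS, variance, kurtosis, skewness, fifth-order central moment."""
    mu, sd = c.mean(), c.std()
    return [np.sqrt(np.mean(c ** 2)),
            c.var(),
            np.mean((c - mu) ** 4) / sd ** 4,
            np.mean((c - mu) ** 3) / sd ** 3,
            np.mean((c - mu) ** 5)]

def channel_features(x):
    """5 statistics from cA7 and the three coarsest detail bands."""
    coeffs = wavedec(x, level=7)      # [cA7, cD7, cD6, cD5, ..., cD1]
    bands = coeffs[:4]                # cA7, cD7, cD6, cD5 (assumed reading)
    return np.concatenate([stats5(c) for c in bands])

# A 3-s trial at an assumed 1 kHz gives 3,000 samples per channel.
rng = np.random.default_rng(0)
feat = channel_features(rng.standard_normal(3000))
print(feat.shape)   # (20,) -> 4 sub-bands x 5 statistics
```

Per the quoted description, this per-channel vector (here 20-dimensional) is computed for each selected EEG channel rather than once per trial, which multiplies the available training examples.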
“…In most of the works in the literature, such as [3]-[6], [8]-[18], [20]-[22], individual EEG channels are considered separately for extracting features. In these works, except [22] and [26], wavelet-domain features, mel-frequency cepstral coefficients (MFCCs), and/or temporal-domain features are extracted from each channel and concatenated to obtain the feature vector for each trial. In [22], [26], the features extracted from individual channels are treated as distinct data vectors, and the decisions of the classifier for each channel are combined to obtain the final classification result.…”
Section: Introduction
confidence: 99%
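The channel-wise decision fusion attributed to [22], [26] above can be illustrated with a small sketch. The classifier itself is abstracted away, and the function name, the plain majority vote (the cited works may use a weighted or probabilistic combination), and the example labels are illustrative assumptions:

```python
from collections import Counter

def fuse_channel_decisions(per_channel_labels):
    """Combine per-channel classifier outputs into one trial-level label
    by majority vote. per_channel_labels: one predicted label per channel."""
    votes = Counter(per_channel_labels)
    return votes.most_common(1)[0][0]

# Hypothetical decisions from 11 channels for one trial, using two of the
# paper's vowel prompts as example class labels.
decisions = ['iy', 'uw', 'iy', 'iy', 'uw', 'iy',
             'iy', 'uw', 'iy', 'uw', 'iy']
print(fuse_channel_decisions(decisions))   # 'iy' (7 of 11 votes)
```

The contrast with the concatenation approach is that here each channel yields an independent prediction, so a few noisy channels can be outvoted rather than corrupting a single long feature vector.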