2018
DOI: 10.1007/978-3-030-04497-8_20

Tensor Decomposition for Imagined Speech Discrimination in EEG

Cited by 7 publications (6 citation statements)
References 22 publications
“…In the time domain, the feature extraction process is often done through statistical analysis, obtaining statistical features such as standard deviation (SD), root mean square (RMS), mean, variance, sum, maximum, minimum, Hjorth parameters, sample entropy, and autoregressive (AR) coefficients, among others (Riaz et al., 2014; Iqbal et al., 2016; AlSaleh et al., 2018; Cooney et al., 2018; Paul et al., 2018; Lee et al., 2019). On the other hand, the most common methods used to extract features from the frequency domain include Mel Frequency Cepstral Coefficients (MFCC), the Short-Time Fourier Transform (STFT), Fast Fourier Transform (FFT), Wavelet Transform (WT), Discrete Wavelet Transform (DWT), and Continuous Wavelet Transform (CWT) (Riaz et al., 2014; Salinas, 2017; Cooney et al., 2018; García-Salinas et al., 2018; Panachakel et al., 2019; Pan et al., 2021). Additionally, there is a method called Bag-of-Features (BoF), proposed by Lin et al. (2012), in which a time-frequency analysis is done to convert the signal into words using Symbolic Aggregate approXimation (SAX).…”
Section: Feature Extraction Techniques in Literature
confidence: 99%
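As an illustration of the time-domain statistics listed above (not the pipeline of any of the cited works), the common features — SD, RMS, variance, and the Hjorth parameters — can be sketched in NumPy; the Hjorth values follow the standard activity/mobility/complexity definitions:

```python
import numpy as np

def time_domain_features(x):
    """Common time-domain EEG features for one channel.
    Hjorth parameters: activity = var(x), mobility = sqrt(var(x')/var(x)),
    complexity = mobility(x') / mobility(x)."""
    dx = np.diff(x)    # first difference, a discrete derivative
    ddx = np.diff(dx)  # second difference
    activity = np.var(x)
    mobility = np.sqrt(np.var(dx) / np.var(x))
    complexity = np.sqrt(np.var(ddx) / np.var(dx)) / mobility
    return {
        "mean": np.mean(x),
        "sd": np.std(x),
        "rms": np.sqrt(np.mean(x ** 2)),
        "variance": activity,
        "hjorth_mobility": mobility,
        "hjorth_complexity": complexity,
    }

# Example on a synthetic one-second "channel" sampled at 256 Hz
rng = np.random.default_rng(0)
feats = time_domain_features(rng.standard_normal(256))
```

In practice these scalars would be computed per channel and per trial and concatenated into a feature vector for a classifier.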
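The SAX step of the Bag-of-Features approach mentioned above can be sketched in a few lines: Piecewise Aggregate Approximation (PAA) compresses the z-normalised signal into segment means, which are then mapped to letters via N(0,1) breakpoints. This is a generic SAX sketch, not the exact procedure of Lin et al. (2012); the segment count and 4-letter alphabet are illustrative choices:

```python
import numpy as np

def sax_word(x, segments=8, alphabet="abcd"):
    """PAA followed by SAX symbolisation (len(x) must divide by segments).
    Breakpoints are the N(0,1) quartiles for a 4-letter alphabet."""
    x = (x - x.mean()) / x.std()                 # z-normalise, as SAX assumes
    paa = x.reshape(segments, -1).mean(axis=1)   # segment means
    breakpoints = np.array([-0.6745, 0.0, 0.6745])
    return "".join(alphabet[i] for i in np.searchsorted(breakpoints, paa))

# A rising ramp maps to a monotonically increasing word
print(sax_word(np.arange(64.0)))  # → "aabbccdd"
```

A Bag-of-Features representation would then count word occurrences over sliding windows.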
“…[24] classified vowels using random forest after downsampling the data and reported an accuracy of 22.32%. Discrimination of imagined speech in EEG was proposed in [45] using tensor decomposition. Hilbert transform and Hilbert spectrum methods were used by [46], [47], and [48] to decode imagined speech from EEG signals; these studies used two different syllables during the experiment but with four and seven subjects, respectively.…”
Section: Decoding of Imagined Speech Based on EEG
confidence: 99%
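The Hilbert-transform methods referenced here rest on the analytic signal, which yields an instantaneous envelope and phase. A minimal FFT-based sketch (the textbook construction, not the specific pipelines of [46]–[48]): zero the negative frequencies and double the positive ones, then inverse-transform:

```python
import numpy as np

def analytic_signal(x):
    """FFT-based analytic signal (discrete Hilbert transform):
    suppress negative frequencies, double positive ones."""
    n = len(x)
    spec = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0      # Nyquist bin kept once for even n
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(spec * h)

# Envelope and instantaneous phase of a 10 Hz tone over 1 s at 512 Hz
t = np.linspace(0, 1, 512, endpoint=False)
z = analytic_signal(np.cos(2 * np.pi * 10 * t))
envelope = np.abs(z)         # ≈ 1 everywhere for a pure tone
phase = np.unwrap(np.angle(z))
```

The envelope and phase series are what Hilbert-spectrum features are typically built from.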
“…Hilbert transform and Hilbert spectrum methods were used by [46], [47], and [48] to decode imagined speech from EEG signals; these studies used two different syllables during the experiment but with four and seven subjects, respectively. Wavelet transform was used for feature extraction with alternating least squares approximation, and the data were downsampled for vowel classification; an accuracy of 59.70% was obtained using an SVM classifier [45]. Multi-class classification of words was proposed in [49] using connectivity features.…”
Section: Decoding of Imagined Speech Based on EEG
confidence: 99%
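Wavelet feature extraction of the kind referenced here usually reduces each trial to subband energies. A self-contained sketch using the orthonormal Haar wavelet (an illustrative stand-in for the wavelet family actually used in [45], which the excerpt does not specify):

```python
import numpy as np

def haar_dwt(x):
    """One level of the orthonormal Haar DWT: approximation (low-pass)
    and detail (high-pass) coefficients; len(x) must be even."""
    pairs = x.reshape(-1, 2)
    approx = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2)
    detail = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2)
    return approx, detail

def wavelet_energy_features(x, levels=3):
    """Relative energy per subband, a common wavelet feature vector:
    detail energies at each level plus the final approximation energy."""
    energies = []
    for _ in range(levels):
        x, detail = haar_dwt(x)
        energies.append(np.sum(detail ** 2))
    energies.append(np.sum(x ** 2))
    energies = np.array(energies)
    return energies / energies.sum()   # normalise to relative energies
```

Because the Haar transform is orthonormal, the subband energies sum exactly to the input energy (Parseval), so the relative-energy vector is a well-behaved classifier input.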
“…The dataset authors themselves [19] provided the initial classification, where they downsampled the data and used a RF classifier to get an accuracy of 22.32% for vowels. Garcia-Salinas et al. [20] also downsampled the data and used wavelet transform with alternating least squares approximation and a linear SVM classifier, obtaining an average inter-subject accuracy of 59.70% for words on the first three subjects. Cooney et al. [21] used a deep and a shallow CNN to classify word-pairs from the dataset and used independent component analysis with Hessian approximation to achieve an average accuracy of 62.37% and 60.88% for the deep and shallow CNNs, respectively.…”
Section: Recent Approaches
confidence: 99%
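Several of the cited approaches downsample the EEG before classification. One simple scheme (illustrative only; the excerpts do not state which decimation method the cited works used) is block averaging, where each output sample is the mean of `factor` consecutive input samples, which also acts as a crude anti-aliasing filter:

```python
import numpy as np

def downsample(x, factor):
    """Downsample by block averaging: trim to a multiple of `factor`,
    then average each non-overlapping block of `factor` samples."""
    n = (len(x) // factor) * factor
    return x[:n].reshape(-1, factor).mean(axis=1)

# 8 samples at factor 2 → 4 block means
print(downsample(np.arange(8.0), 2))  # → [0.5 2.5 4.5 6.5]
```

Production pipelines would typically apply a proper low-pass filter before decimating, but block averaging conveys the idea.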