Imagined speech has recently become an important neuro-paradigm in the field of brain-computer interface (BCI) research. Electroencephalogram (EEG) recordings made during imagined speech production are difficult to decode accurately, due to factors such as weak neural correlates, low spatial specificity, and noise introduced during the recording process. In this study, a dataset of EEG recordings obtained during the production of eleven different units of imagined speech is used to investigate the relative effects of different features on classification accuracy. Three distinct feature-sets are computed from the data: a linear feature-set, a non-linear feature-set, and a feature-set comprised only of mel frequency cepstral coefficients (MFCC). Each feature-set is used to train a decision tree classifier and a Support Vector Machine classifier. The results indicate that MFCC features provide greater discrimination of imagined speech EEG recordings than the other features evaluated, and that phonological differences between imagined words can serve as an aid to classification.
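The pipeline described in the abstract (MFCC feature extraction followed by decision tree and SVM classification) can be sketched as below. This is a minimal illustration, not the authors' implementation: synthetic single-channel signals stand in for the EEG dataset, the MFCC computation is a simplified single-frame version (power spectrum, mel filterbank, log, DCT), and all parameters (sampling rate, filter counts, class frequencies) are assumed for the example.

```python
import numpy as np
from scipy.fftpack import dct
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def mfcc_features(signal, fs=1000, n_filters=20, n_coeffs=12):
    """Simplified single-frame MFCC: power spectrum -> mel filterbank -> log -> DCT."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    inv_mel = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    # Triangular filters with centers equally spaced on the mel scale
    hz_points = inv_mel(np.linspace(mel(0.0), mel(fs / 2.0), n_filters + 2))
    energies = np.empty(n_filters)
    for i in range(n_filters):
        lo, mid, hi = hz_points[i], hz_points[i + 1], hz_points[i + 2]
        up = np.clip((freqs - lo) / (mid - lo), 0.0, 1.0)
        down = np.clip((hi - freqs) / (hi - mid), 0.0, 1.0)
        energies[i] = np.sum(spectrum * np.minimum(up, down))
    return dct(np.log(energies + 1e-10), norm="ortho")[:n_coeffs]

# Synthetic stand-in for the dataset: 11 imagined-speech classes, 20 trials each,
# where each class is a noisy sinusoid at a class-specific (hypothetical) frequency.
rng = np.random.default_rng(0)
n_trials, n_samples, n_classes = 220, 512, 11
y = np.repeat(np.arange(n_classes), n_trials // n_classes)
t = np.arange(n_samples) / 1000.0
X = np.array([
    mfcc_features(np.sin(2 * np.pi * (20 + 30 * c) * t)
                  + 0.5 * rng.standard_normal(n_samples))
    for c in y
])

# Train and compare the two classifier types named in the abstract
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)
for name, clf in [("SVM", SVC(kernel="rbf")),
                  ("Decision tree", DecisionTreeClassifier(random_state=0))]:
    clf.fit(X_tr, y_tr)
    print(f"{name} test accuracy: {accuracy_score(y_te, clf.predict(X_te)):.2f}")
```

The same structure applies to the paper's other two feature-sets: only the `mfcc_features` function would be swapped for a linear or non-linear feature extractor, with the classifiers and evaluation unchanged.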