2016
DOI: 10.1007/s41133-016-0001-z

A Communication Paradigm Using Subvocalized Speech: Translating Brain Signals into Speech

Abstract: Recent science and technology studies in neuroscience, rehabilitation, and machine learning have focused attention on the EEG-based brain-computer interface (BCI) as an exciting field of research. Though the primary goal of the BCI has been to restore communication in the severely paralyzed, BCI for speech communication has acquired recognition in a variety of non-medical fields. These fields include silent speech communication, cognitive biometrics, and synthetic telepathy, to name a few. Though potentially a…

Cited by 27 publications (7 citation statements) · References 23 publications
“…Although those reports were not exactly matched to the results of classification, these discrepancies of subjective sensory perception might be related to process of imagining speech and classification results. Besides, we have not tried multiclass classification in this study, yet some attempts in multiclass classification of imagined speech have been performed by others [8, 46, 47]. These issues related to intersubject variability and multiclass systems should be considered for our future study to develop more practical and generalized BCI systems using silent speech.…”
Section: Results | Citation type: mentioning | Confidence: 99%
“…Although the area of BCI based speech intent recognition has received increasing attention within the research community in the past few years, most research has focused on classification of individual speech categories in terms of discrete vowels, phonemes and words [5][6][7][8][9][10][11][12][13]. This includes categorization of imagined EEG signal into binary vowel categories like /a/, /u/ and rest [5][6][7]; binary syllable classes like /ba/ and /ku/ [8][9][10]14]; a handful of control words like 'up', 'down', 'left', 'right' and 'select' [13] or others like 'water', 'help', 'thanks', 'food', 'stop' [11], Chinese characters [12], etc. Such works mostly involve traditional signal processing or manual feature handcrafting along with linear classifiers (e.g., SVMs).…”
Section: Introduction | Citation type: mentioning | Confidence: 99%
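For readers unfamiliar with the "traditional signal processing or manual feature handcrafting along with linear classifiers (e.g., SVMs)" approach contrasted in the statement above, the sketch below illustrates one minimal version of such a pipeline. It is not taken from any of the cited papers: the band-power features, sampling rate, channel count, and synthetic data are assumptions made purely for illustration.

```python
# Illustrative sketch of a "handcrafted features + linear SVM" pipeline for
# imagined-speech EEG, of the kind contrasted with deep models above.
# Band-power features and all shapes/values here are assumptions, not taken
# from the cited works.
import numpy as np
from scipy.signal import welch
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

FS = 256  # assumed sampling rate (Hz)
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}  # canonical EEG bands

def band_power_features(epochs):
    """epochs: (n_trials, n_channels, n_samples) -> (n_trials, n_channels * n_bands)."""
    feats = []
    for trial in epochs:
        freqs, psd = welch(trial, fs=FS, nperseg=FS)  # PSD per channel
        row = [psd[:, (freqs >= lo) & (freqs < hi)].mean(axis=1)
               for lo, hi in BANDS.values()]
        feats.append(np.concatenate(row))
    return np.log(np.array(feats) + 1e-12)  # log band power is a common choice

# Synthetic stand-in data: 40 trials, 8 channels, 2-second epochs, binary labels
rng = np.random.default_rng(0)
epochs = rng.standard_normal((40, 8, 2 * FS))
labels = rng.integers(0, 2, size=40)  # e.g., imagined /ba/ vs. /ku/

clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
scores = cross_val_score(clf, band_power_features(epochs), labels, cv=5)
print("cross-validated accuracy:", scores.mean())
```

On real recordings the feature extraction and classifier choice would of course be tuned per study; this sketch only shows the overall shape of the traditional pipeline.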
“…In speech imagination, also known as Silent-Talk and Silent-Speech, the participants imagine the pronunciation of a particular vowel [2,[11][12][13], syllable [14][15][16][17], or word [18][19][20][21][22] in some defined time intervals. EEG signal during these intervals is processed to determine the imagined word [17,19,[23][24][25].…”
Section: Introduction | Citation type: mentioning | Confidence: 99%
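The statement above describes participants imagining pronunciation "in some defined time intervals," with the EEG recorded during those intervals then processed to determine the imagined word. A minimal sketch of that epoching step is shown below; the cue onsets, interval length, and sampling rate are hypothetical values chosen for illustration, not parameters from the cited works.

```python
# Illustrative sketch only: cutting a continuous EEG recording into the
# "defined time intervals" of an imagined-speech paradigm. Cue times,
# sampling rate, and array shapes are assumptions, not from the cited works.
import numpy as np

FS = 256          # assumed sampling rate (Hz)
EPOCH_SEC = 2.0   # assumed length of each imagination interval

def epoch_continuous(eeg, cue_onsets_sec, fs=FS, length_sec=EPOCH_SEC):
    """eeg: (n_channels, n_samples) -> (n_trials, n_channels, n_epoch_samples)."""
    n = int(round(length_sec * fs))
    trials = []
    for onset in cue_onsets_sec:
        start = int(round(onset * fs))
        if start + n <= eeg.shape[1]:  # skip cues that run past the recording
            trials.append(eeg[:, start:start + n])
    return np.stack(trials)

# Synthetic continuous recording (8 channels, 60 s) with imagination cues every 5 s
eeg = np.random.default_rng(1).standard_normal((8, 60 * FS))
cues = np.arange(2.0, 58.0, 5.0)  # cue onsets in seconds
epochs = epoch_continuous(eeg, cues)
print(epochs.shape)  # (12, 8, 512)
```

The resulting per-interval epochs are what a subsequent feature-extraction and classification stage (such as the SVM sketch above) would operate on.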