2022
DOI: 10.1038/s41467-021-27725-3
Imagined speech can be decoded from low- and cross-frequency intracranial EEG features

Abstract: Reconstructing intended speech from neural activity using brain-computer interfaces holds great promise for people with severe speech production deficits. While decoding overt speech has progressed, decoding imagined speech has met limited success, mainly because the associated neural signals are weak and variable compared to overt speech, and hence difficult to decode by learning algorithms. We obtained three electrocorticography datasets from 13 patients, with electrodes implanted for epilepsy evaluation, who p…


Cited by 93 publications (82 citation statements)
References 79 publications
“…In recent years, there have been important technological and methodological advancements in perceived and imagined speech decoding (Martin et al., 2018; Panachakel & Ramakrishnan, 2021). Recent works focus on the classification of vowels (M. S. Mahmud et al., 2020; N. T. Duc & B. Lee, 2020), syllables (Archila-Meléndez et al., 2018; Brandmeyer et al., 2013; Correia et al., 2015), words (Ossmy et al., 2015; Proix et al., 2022; Vorontsova et al., 2021) and complete sentences (Chakrabarti et al., 2015; Zhang et al., 2012), distinguishing stimuli mainly at the semantic level. The most advanced online decoding techniques rely heavily on the articulatory representation of syllables and words in the motor and supplementary motor cortices (Anumanchipalli et al., 2019).…”
Section: Discussion
confidence: 99%
“…However, this approach can only be applied to patients with intact motor commands, who represent a minority of the patients with speech impairment (Guenther et al., 2009; Wilson et al., 2020). Thus, other decoding strategies that rely on the brain regions that encode speech are needed (Proix et al., 2022). Here, we decoded the acoustic stimuli exploiting 29 different speech-encoding cortical areas spanning the entire brain.…”
Section: Discussion
confidence: 99%
“…Multi-electrode LFP recordings from one or more brain areas can be especially informative. For example, multi-channel LFPs may be used to decode visual stimuli [12, 13], auditory stimuli [13, 143], speech production [52, 112, 122], acute pain onset and intensity [165], and even semantic representations [39, 97].…”
Section: Single-channel Neural Signals
confidence: 99%
“…For example, in the domains of speech comprehension and speech production, temporal information is a primary indicator of meaning. Studying neural responses to speech therefore requires considering how neural correlates of speech unfold over time [52, 122].…”
Section: Static Versus Dynamic Measures of Brain Activity
confidence: 99%