2012
DOI: 10.1038/ncomms1995
Structured neuronal encoding and decoding of human speech features

Abstract: Human speech sounds are produced through a coordinated movement of structures along the vocal tract. Here we show highly structured neuronal encoding of vowel articulation. In medial-frontal neurons, we observe highly specific tuning to individual vowels, whereas superior temporal gyrus neurons have non-specific, sinusoidally-modulated tuning (analogous to motor cortical directional tuning). At the neuronal population level, a decoding analysis reveals that the underlying structure of vowel encoding reflects t…

Cited by 60 publications (70 citation statements)
References 37 publications (38 reference statements)
“…It is therefore of great interest to use every opportunity to study single-unit responses in awake humans [51]. This approach has already led to some insights into language [52], representation of objects [53], and cognitive control [54]. In addition, single-unit recordings can provide new insights into mental diseases, such as obsessive-compulsive disorders [55].…”
Section: Choice of Animal Model
confidence: 99%
“…Recently, we used single-unit recordings obtained from human subjects during the pronunciation of speech segments to propose a speech BMI that is based on direct decoding of the phoneme that the user wishes to pronounce (Tankus et al, 2012a); decoded phonemes can be pre-recorded and played back. The decoder employs neurons recorded mainly from two populations of cells, each of which was found to exhibit a very different type of encoding.…”
Section: Speech Brain–Machine Interfaces
confidence: 99%
“…The tuning is sinusoidal along a dimension representing the highest point of the tongue during articulation, as was determined according to the IPA chart for vowels. This order is natural to the neuronal representation (see (Tankus et al, 2012a, 2012b) for details). Highly accurate prediction of vowel sounds during utterance (93%; chance: 20%) was demonstrated using a new decoder that is based on sparse decomposition for automatic selection of the task-relevant features to decode from (Tankus et al, 2012b), and which is highly suitable for real-time implementation.…”
Section: Speech Brain–Machine Interfaces
confidence: 99%
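The tuning scheme described above can be made concrete with a toy simulation. The sketch below is purely illustrative and is not the authors' actual decoder: it places five vowels along one articulatory dimension mapped to angles, models each neuron's firing rate as cosine-modulated around a preferred angle (the "sinusoidal tuning" the quote refers to), and decodes noisy population responses with a simple nearest-template classifier rather than the sparse-decomposition method of Tankus et al. All parameter values and names here are invented for the example; only the five-class structure (chance = 20%) comes from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Five vowels placed along one articulatory dimension (highest tongue
# position), mapped to angles on a circle -- an illustrative stand-in
# for the IPA-based ordering described in the quoted passage.
vowel_angles = np.linspace(0, 2 * np.pi, 5, endpoint=False)

def cosine_rate(angle, baseline=10.0, modulation=5.0, preferred=0.0):
    """Cosine-tuned firing rate: baseline + modulation * cos(angle - preferred)."""
    return baseline + modulation * np.cos(angle - preferred)

# Simulate a small population with random preferred angles.
n_neurons = 40
preferred = rng.uniform(0, 2 * np.pi, n_neurons)
mean_rates = cosine_rate(vowel_angles[:, None], preferred=preferred[None, :])  # (5, 40)

# Noisy single-trial responses modeled as Poisson spike counts.
n_trials = 50
trials = rng.poisson(mean_rates[None, :, :], size=(n_trials, 5, n_neurons))

# Nearest-template decoding: assign each trial to the vowel whose mean
# population response vector is closest (trained on the same trials,
# which is fine for a toy demonstration but not a real evaluation).
templates = trials.mean(axis=0)                                   # (5, 40)
dists = np.linalg.norm(
    trials[:, :, None, :] - templates[None, None, :, :], axis=-1  # (50, 5, 5)
)
decoded = dists.argmin(axis=2)                                    # (50, 5)
accuracy = (decoded == np.arange(5)[None, :]).mean()
print(f"decoding accuracy: {accuracy:.2f} (chance: 0.20)")
```

With this many cosine-tuned neurons the classifier performs far above the 20% chance level, illustrating why even broadly (non-specifically) tuned populations can support accurate vowel decoding.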
“…Multiple levels of speech representation have been successfully decoded using intracranial neural signals. These include auditory representations (Guenther et al, 2009; Pasley et al, 2012), consonants and vowels (Pei et al, 2011a,b; Tankus et al, 2012), and words (Kellis et al, 2010). Later, we will review a number of these results as applied to three different levels of speech representation: auditory, phonetic, and articulatory processing.…”
Section: A Neural Systems Approach To Language
confidence: 99%