Linking Speech Perception and Neurophysiology: Speech Decoding Guided by Cascaded Oscillators Locked to the Input Rhythm (2011)
DOI: 10.3389/fpsyg.2011.00130

Abstract: The premise of this study is that current models of speech perception, which are driven by acoustic features alone, are incomplete, and that the role of decoding time during memory access must be incorporated to account for the patterns of observed recognition phenomena. It is postulated that decoding time is governed by a cascade of neuronal oscillators, which guide template-matching operations at a hierarchy of temporal scales. Cascaded cortical oscillations in the theta, beta, and gamma frequency bands are …

Cited by 325 publications (428 citation statements) · References 52 publications (97 reference statements)
“…This reflects the hierarchy of timescales present in the acoustic speech signal, which contains both fast events (~20 msec), such as the onset and offset of vocalic voicing and the broadband burst after the release of the oral-cavity occlusion, and slower (~100 msec) modulations in the envelope of the speech sound and smooth formant transitions. The online continuous processing presented here opens up the possibility of exploring different temporal scales, either nested (Ghitza, 2011) or in parallel.…”
Section: Discussion
confidence: 99%
“…One possibility by which entrainment can arise is that slow envelope modulations prominent in many natural sounds directly imprint on periodic excitability changes in cortical networks and effectively provide an intrinsic copy of the slow stimulus dynamics (Howard and Poeppel, 2010; Ding and Simon, 2012; Zion Golumbic et al., 2012). However, it could equally well be that entrainment is induced by finer-grained (e.g., spectral) features of acoustic stimuli or higher-order properties of the temporal modulation spectrum, even in the absence of clearly visible envelope modulations (Ghitza, 2011). The causal mechanisms behind the entrainment of cortical oscillations clearly require additional investigation in future studies.…”
Section: Entrainment Of Oscillations To Dynamic Environments
confidence: 96%
“…studies found reduced auditory entrainment in alpha compared with theta oscillations (Luo and Poeppel, 2007; Schroeder et al., 2008; Ding and Simon, 2012; Ng et al., 2012). Furthermore, although alpha signals may shape external attentional control on auditory cortex (Kerlin et al., 2010), the alpha rhythm of the auditory cortex itself does not seem crucial for the temporal hierarchy of oscillations implied in auditory scene analysis, which likely reflects the prominent timescales of natural sounds and speech (Ghitza, 2011). Future work is required to fully elucidate whether the differential importance of theta and alpha signals reflects intrinsic properties of either sensory system, or whether additional attributes of the oscillatory state (e.g., entrained vs. spontaneous) shape the impact of theta and alpha phase on stimulus detection.…”
Section: Role Of Oscillatory State For Perception
confidence: 99%
“…These observations have been related to the neural encoding of speech by a family of "multi-time resolution models" of speech processing developed in the field of auditory neuroscience (e.g., Poeppel, 2003; Hickok and Poeppel, 2007; Ghitza and Greenberg, 2009; Ghitza, 2011). Multi-time resolution models of speech processing suggest that different rates of amplitude modulation in the envelope are encoded by neuronal oscillations at corresponding temporal rates.…”
Section: The Brain: Oscillatory Neuronal Entrainment And Speech Encoding
confidence: 99%
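The multi-time resolution idea in these citation statements — that envelope modulations at different rates map onto oscillations at corresponding rates — can be illustrated with a minimal signal-processing sketch. This is not from the paper; the synthetic signal, sampling rate, and band edges (theta 4–8 Hz, a faster 25–40 Hz band) are illustrative assumptions:

```python
import numpy as np
from scipy.signal import hilbert, butter, sosfiltfilt

fs = 1000  # Hz, assumed sampling rate
t = np.arange(0, 2.0, 1 / fs)
# Synthetic "speech-like" signal: a slow ~5 Hz (syllable-rate) envelope
# amplitude-modulating a 150 Hz carrier.
signal = (1 + np.sin(2 * np.pi * 5 * t)) * np.sin(2 * np.pi * 150 * t)

# Broadband amplitude envelope via the Hilbert transform.
envelope = np.abs(hilbert(signal))

def band_modulation(env, lo, hi, fs):
    """Band-pass the envelope to isolate one modulation timescale."""
    sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, env - env.mean())

theta_mod = band_modulation(envelope, 4, 8, fs)    # ~100 msec scale
fast_mod = band_modulation(envelope, 25, 40, fs)   # ~25-40 msec scale

# For this signal the slow (theta-rate) modulation band carries
# nearly all of the envelope power.
```

In this sketch the theta-band envelope recovers the 5 Hz syllabic modulation, while the faster band is nearly flat; a real speech signal would populate both timescales, consistent with the nested hierarchy the statements describe.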