2015
DOI: 10.1080/23273798.2015.1101145

Can you hear me yet? An intracranial investigation of speech and non-speech audiovisual interactions in human cortex

Abstract: In everyday conversation, viewing a talker's face can provide information about the timing and content of an upcoming speech signal, resulting in improved intelligibility. Using electrocorticography, we tested whether human auditory cortex in Heschl's gyrus (HG) and on superior temporal gyrus (STG) and motor cortex on precentral gyrus (PreC) were responsive to visual/gestural information prior to the onset of sound and whether early stages of auditory processing were sensitive to the visual content (speech syl…

Cited by 14 publications (17 citation statements)
References 93 publications
“…In general, HG (high-gamma) band responses were enhanced compared to baseline, LF (low-frequency) responses were decreased compared to baseline, and dynamic visual speech information increased the HG response while reducing the LF response (an enhanced decrease). Enhanced HG band responses and decreased LF responses compared to baseline have previously been reported for audiovisual speech with ECoG (Rhone et al., 2016; Schepers et al., 2015; Uno et al., 2015) and appear to be a general response profile in population electrophysiological responses to sensory stimulation (e.g., Scheeringa, Koopmans, van Mourik, Jensen, & Norris, 2016; Siegel, Donner, Oostenveld, Fries, & Engel, 2008). The magnitude changes in neural responses reported here, particularly in the broad HG range, which has been linked to neuronal spiking activity (Ray & Maunsell, 2011), might underlie the BOLD response changes seen in previous fMRI studies (Mukamel et al., 2005).…”
Section: Discussion (supporting)
confidence: 72%
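The baseline comparison this statement describes is easy to make concrete. Below is a minimal, illustrative Python sketch — not the analysis pipeline of any cited study — of comparing band-limited ECoG power against a pre-stimulus baseline. The sampling rate, the high-gamma (70–150 Hz) and low-frequency (4–30 Hz) band edges, and the simulated trace are all assumptions for illustration.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

fs = 1000.0                              # assumed sampling rate (Hz)
t = np.arange(-0.5, 1.0, 1.0 / fs)       # 0.5 s baseline, 1.0 s post-stimulus
rng = np.random.default_rng(0)
ecog = rng.standard_normal(t.size)       # stand-in for one recorded ECoG trace

def band_power(x, lo, hi, fs):
    """Instantaneous power in [lo, hi] Hz: zero-phase band-pass + Hilbert envelope."""
    sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
    return np.abs(hilbert(sosfiltfilt(sos, x))) ** 2

hg = band_power(ecog, 70.0, 150.0, fs)   # "HG": high-gamma band (assumed edges)
lf = band_power(ecog, 4.0, 30.0, fs)     # "LF": low-frequency band (assumed edges)

baseline = t < 0                         # pre-stimulus samples
for name, p in (("HG", hg), ("LF", lf)):
    # Ratio > 1 indicates enhancement over baseline; < 1 indicates a decrease.
    ratio = p[~baseline].mean() / p[baseline].mean()
    print(f"{name} power, post-stimulus / baseline: {ratio:.2f}")
```

On real data, such ratios would be averaged over trials and tested against the baseline distribution; on this white-noise stand-in they hover near 1 by construction.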
“…Most studies on audiovisual speech perception so far have compared average neural response magnitudes and latencies between unimodal and multimodal experimental conditions (Besle et al.; Rhone et al.; Schepers, Yoshor, & Beauchamp; van Wassenhove, Grant, & Poeppel), but did not focus on how the dynamic speech characteristics of the different modalities are represented (e.g., using continuous sentence stimuli, or separate perceptual streams such as audio and video). The effects of visual speech information on auditory speech tracking have been investigated with EEG under noise-free (Crosse, Butler, & Lalor) and noisy (Crosse, Di Liberto, & Lalor) auditory conditions.…”
Section: Introduction (mentioning)
confidence: 99%
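As a rough illustration of what "auditory speech tracking" means operationally, the sketch below fits a linear temporal response function (TRF) from a speech envelope to a neural channel with ridge regression, in the spirit of the mTRF approach associated with Crosse and colleagues. The synthetic data, lag range, and ridge parameter are assumptions for illustration, not details from any cited study.

```python
import numpy as np

fs = 100                                 # assumed sampling rate (Hz)
n = 30 * fs                              # 30 s of synthetic data
rng = np.random.default_rng(1)
envelope = rng.random(n)                 # stand-in for a speech amplitude envelope
true_trf = np.array([0.0, 0.5, 1.0, 0.5, 0.0])        # toy response kernel
neural = (np.convolve(envelope, true_trf, mode="full")[:n]
          + 0.1 * rng.standard_normal(n))             # simulated tracking channel

lags = np.arange(25)                     # 0-240 ms of lags at fs = 100 Hz
X = np.column_stack([np.roll(envelope, k) for k in lags])
X[:lags.max()] = 0                       # discard wrapped-around samples

lam = 1.0                                # ridge parameter (assumed)
trf = np.linalg.solve(X.T @ X + lam * np.eye(len(lags)), X.T @ neural)

# "Tracking" is then quantified as how well the envelope predicts the signal.
r = np.corrcoef(X @ trf, neural)[0, 1]
print(f"envelope-tracking correlation: {r:.2f}")
```

In the noise-free versus noisy comparisons mentioned above, the same prediction accuracy would be computed per condition, with visual speech expected to change it most when audition is degraded.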
“…A critical brain area for multisensory speech perception is the posterior superior temporal gyrus and sulcus (pSTG), the location of human auditory association cortex (Moerel et al., 2014; Leaver and Rauschecker, 2016). The belt and parabelt areas in pSTG are selective both for the complex acoustic-phonetic features that comprise auditory speech (Belin et al., 2000; Formisano et al., 2008; Mesgarani et al., 2014) and for the mouth movements that comprise visual speech (Beauchamp et al., 2004; Bernstein et al., 2011; Rhone et al., 2016; Ozker et al., 2017; Zhu and Beauchamp, 2017; Ozker et al., 2018b; Rennig and Beauchamp, 2018; Beauchamp, 2019).…”
Section: Introduction (mentioning)
confidence: 99%
“…In non-human primates, single neurons in pSTG/S respond to both auditory and visual social communication signals (3–5). In humans, small populations of neurons in pSTG/S recorded with intracranial electrodes respond to both auditory and visual speech (6, 7). While the idea that pSTG/S integrates visual speech information with noisy auditory speech in the service of comprehension seems reasonable, it is supported by only limited empirical evidence.…”
Section: Introduction (mentioning)
confidence: 99%