2013
DOI: 10.1371/journal.pone.0068959

An fMRI Study of Audiovisual Speech Perception Reveals Multisensory Interactions in Auditory Cortex

Abstract: Research on the neural basis of speech-reading implicates a network of auditory language regions involving inferior frontal cortex, premotor cortex and sites along superior temporal cortex. In audiovisual speech studies, neural activity is consistently reported in posterior superior temporal sulcus (pSTS) and this site has been implicated in multimodal integration. Traditionally, multisensory interactions are considered high-level processing that engages heteromodal association cortices (such as STS). Recent w…


Citation types: 10 supporting, 49 mentioning, 0 contrasting

Year published: 2014–2024


Cited by 64 publications (59 citation statements)
References 43 publications
Citation statements (ordered by relevance):
“…In contrast, we hypothesized that experiments emphasizing validation of AV input over conflict would consistently recruit regions more proximal to sensory areas as compared to frontal and parietal regions hypothesized for processing conflicting AV speech, where sensory areas were defined in terms of relative location to A1 or V1 as compared to conflicting AV speech. This hypothesis was supported by previous studies that have shown increased activity for congruent AV speech in auditory areas (Okada et al., 2013; van Atteveldt et al., 2004; van Atteveldt et al., 2007), and increased activity for non-native, second language processing of congruent AV speech in visual areas (Barros-Loscertales et al., 2013). …”
Section: Introduction (supporting)
Confidence: 80%
“…On this account, the history of the pairing of visual and acoustic features and the neural activation associated with each should result in ensembles of visual and auditory neurons that are responsive to the co-occurrence of these events. The operation of such a mechanism is consistent with the finding of Okada et al. (2013) that activity in the auditory cortex is up-regulated (primed) by the presentation of visual speech. Given this, the presentation of visual speech form would facilitate perceptual processing of the matched speech stimuli.…”
Section: Discussion (supporting)
Confidence: 89%
“…This was examined by contrasting stimuli where the auditory and visual speech matched with those where they did not. This manipulation was based on demonstrations that there is a functional correspondence between lip and mouth movements and particular speech spectral properties (Berthommier, 2004; Girin, Schwartz, & Feng, 2001) and that seeing visual speech significantly up-regulates the activity of auditory cortex compared to auditory speech alone (Okada, Venezia, Matchin, Saberi, & Hickok, 2013). Combining these two observations leads to the prediction that visual speech form will facilitate decisions based on the processing of its auditory counterpart.…”
Section: Introduction (mentioning)
Confidence: 99%
“…For example, audiovisual speech (which should be a more predictable stimulus than auditory-only speech) results in increased activity in primary and secondary auditory cortices (Okada, Venezia, Matchin, Saberi, & Hickok, 2013), consistent with increased activity in auditory cortex during silent lipreading (Calvert et al., 1997). Audiovisual interactions in auditory cortex appear to precede those in posterior STS (Möttönen, Schürmann, & Sams, 2004).…”
Section: Neural Mechanisms Supporting Audiovisual Speech Processing (mentioning)
Confidence: 81%