2007
DOI: 10.1098/rstb.2007.2155
The processing of audio-visual speech: empirical and neural bases

Abstract: In this selective review, I outline a number of ways in which seeing the talker affects auditory perception of speech, including, but not confined to, the McGurk effect. To date, studies suggest that all linguistic levels are susceptible to visual influence, and that two main modes of processing can be described: a complementary mode, whereby vision provides information more efficiently than hearing for some under-specified parts of the speech stream, and a correlated mode, whereby vision partially duplicates …

Cited by 240 publications (242 citation statements)
References 65 publications
“…In the first step, L1 showed a subadditive effect in the bilateral inferior frontal gyrus (IFG; MNI coordinates: 54, 15, 27 and −54, 21, 27; corrected for multiple comparisons using Monte Carlo simulations in AFNI). This region is commonly associated with multisensory processing, including AV speech processing (Lee & Noppeney, 2011; Calvert, 2001; Campbell, 2008). However, in the second step, this region did not survive either the maximum or the minimum criterion.…”
Section: Neural Correlates of Multisensory Integration
confidence: 96%
“…Most notably, a considerable body of evidence has associated the posterior part of the superior temporal sulcus (pSTS) with AV integration during language processing (for reviews, see Amedi et al., 2005; Beauchamp, 2005; Campbell, 2008). This cortical region responds to both visual and auditory speech stimuli and, more importantly, it often shows stronger responses when speech stimuli are simultaneously presented in the two sensory modalities (e.g., speech with co-occurring and correlated mouth movements, amongst others, usually meaningful stimuli).…”
Section: Introduction
confidence: 99%
“…Speech perception is often multimodal (Campbell, 1996, 2008; visual attention and the suggestion that multimodal information processing is atypical in FXS). Additionally, auditory memory is worse than visual memory in DS (e.g., Marcell & Armstrong, 1982), which indicates that, in at least some tasks, individuals rely more on one modality than another.…”
Section: Speech Perception in Typically Developing Children
confidence: 99%