2015
DOI: 10.1016/j.neuropsychologia.2015.06.025
The contribution of dynamic visual cues to audiovisual speech perception

Cited by 14 publications (9 citation statements) | References 74 publications
“…Research on audiovisual speech perception in aging is also relevant to this review, since the temporal relationships between auditory and visual speech cues (Chandrasekaran et al., 2009) facilitate both auditory speech detection (Grant and Seitz, 2000) and recognition (ten Oever et al., 2013; Jaekl et al., 2015). Visual facilitation of auditory speech detection is reduced in older adults (Tye-Murray et al., 2011).…”
Section: Age-Related Changes in Audiovisual Temporal Perception
confidence: 99%
“…Forty years ago, in 1976, Harry McGurk and John MacDonald discovered that by dubbing an auditory syllable (e.g., /ba/, hereafter the auditory component of a stimulus will be specified between slashes) with a different visual syllable (e.g., [ga], hereafter the visual component of a stimulus will be specified between brackets), the resulting auditory percept could be dramatically altered into a completely different syllable (e.g., ‘da’; see Massaro & Stork, ; for a description of how this discovery was made, and Yonovitz et al., ; for a similar, independently achieved contemporary finding). This effect pushed the boundaries of audiovisual (AV) speech perception and multisensory integration by demonstrating that the influence of visual information on auditory speech perception goes beyond being a complement to the acoustic signal when it is degraded (e.g., Sumby & Pollack, ; Ross et al., ; Jaekl et al., ). Since then, the McGurk effect has been used in hundreds of studies to address the behavioral and physiological expression of multisensory integration in general and of AV speech integration in particular (Tiippana et al., ; Alsius et al., , ; Skipper et al., ; van Wassenhove et al., ; Bernstein et al., ; Andersen et al., ; Munhall et al., ; Nahorna et al., , ; Festa et al., ).…”
Section: Introduction
confidence: 97%
“…Thus, measures of task performance under multisensory conditions show that multiple species can take advantage of the often complementary or redundant sensory information available to them in their environment (Bahrick & Lickliter, 2000; Foxe & Simpson, 2002; Gibson, 1969; Hammond-Kenny, Bajo, King, & Nodal, 2016; Stein, London, Wilkinson, & Price, 1996), allowing them to evolve and adapt to novel ecological niches (Karageorgi et al., 2017). In the case of humans, watching lip and facial movements, hand gestures, head nods, and facial configuration (Jaekl, Pesquita, Alsius, Munhall, & Soto-Faraco, 2015), and even feeling the breath of a speaker on your skin (Gick & Derrick, 2009), can all provide additional information to an observer trying to understand what a speaker is saying to them (Ma, Zhou, Ross, Foxe, & Parra, 2009; Ross et al., 2011; Ross, Saint-Amour, Leavitt, Javitt, & Foxe, 2007; Sumby & Pollack, 1954). Even for more basic non-speech stimulus configurations, hearing a sound produced by a visual object is likely to enhance its detectability (Fiebelkorn et al., 2011; Molholm et al., 2002; Van der Burg, Olivers, Bronkhorst, & Theeuwes, 2008).…”
Section: Introduction
confidence: 99%