2006
DOI: 10.1093/cercor/bhl024
Abstract: Viewing a speaker's articulatory movements substantially improves a listener's ability to understand spoken words, especially under noisy environmental conditions. It has been claimed that this gain is most pronounced when auditory input is weakest, an effect that has been related to a well-known principle of multisensory integration--"inverse effectiveness." In keeping with the predictions of this principle, the present study showed substantial gain in multisensory speech enhancement at even the lowest signal…
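For orientation, the multisensory gain discussed in the abstract is typically quantified by comparing audiovisual (AV) with auditory-only (A) word-recognition accuracy at each signal-to-noise ratio (SNR). The short Python sketch below illustrates one common way to tabulate absolute and proportional AV gain; the SNR levels and accuracy values are hypothetical placeholders, not data from this study, and the proportional measure follows the Sumby and Pollack (1954) style normalisation cited in the excerpts further down.

# Hedged sketch: tabulating audiovisual (AV) speech gain across
# signal-to-noise ratios (SNRs) to examine the "inverse effectiveness"
# prediction. All numbers are hypothetical illustrations, not data
# from the paper summarised above.

snr_db = [-24, -20, -16, -12, -8, -4, 0]                    # SNR conditions (dB)
auditory_only = [0.05, 0.10, 0.25, 0.45, 0.65, 0.80, 0.90]  # hypothetical A-only accuracy
audiovisual = [0.15, 0.35, 0.60, 0.75, 0.85, 0.90, 0.93]    # hypothetical AV accuracy

for snr, a, av in zip(snr_db, auditory_only, audiovisual):
    absolute_gain = av - a                    # raw benefit of seeing the speaker
    proportional_gain = (av - a) / (1.0 - a)  # gain relative to the remaining headroom
    print(f"SNR {snr:>4} dB: AV - A = {absolute_gain:.2f}, "
          f"relative gain = {proportional_gain:.2f}")

# Inverse effectiveness predicts the largest gain where the unisensory
# (A-only) input is weakest, the prediction the abstract relates to the
# enhancement observed at low SNRs.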

Cited by 552 publications (572 citation statements) | References 31 publications
“…When some event evokes simultaneous signals from two or more modalities, attention is more quickly and more accurately directed towards the source [33], and the experience of the event itself is enhanced [32]. The perceptual process that combines and creates coherence out of different sensory inputs is referred to as multisensory integration.…”
Section: Temporal Integration and Quality Distortion (mentioning)
confidence: 99%
“…When the acoustic speech signal is degraded by the addition of noise, the presence of dynamic visual speech information improves intelligibility (Sumby & Pollack, 1954). The perceptual significance of the visual speech information thus becomes increased when the auditory speech information is degraded (Erber, 1969; O'Neill, 1954; Ross, Saint-Amour, Leavitt, Javitt, & Foxe, 2007). Manipulating the intelligibility of speech by the presence of acoustic noise has been shown to alter the spatial distribution and the duration of fixations during audiovisual speech perception tasks (Buchan et al., 2007; Vatikiotis-Bateson et al., 1998).…”
Section: Introduction (mentioning)
confidence: 99%
“…The concept of audio-visual speech recognition was introduced based on observations of the human capability for bimodal interpretation [1]. In normal environments, auditory interpretation supersedes visual interpretation; however, visual interpretation gains priority when attempting to understand spoken words in high-noise environments [2].…”
Section: Introduction (mentioning)
confidence: 99%
“…In visual speech recognition, various features such as lip movement, facial motion, and discrete body gestures can be used to assist in speech interpretation [2]. Of all these speech features, an estimated 80% of those usable for classification originate from lip movement [3].…”
Section: Introduction (mentioning)
confidence: 99%