2005
DOI: 10.1016/j.cub.2005.03.046

Audiovisual Integration of Speech Falters under High Attention Demands

Abstract: One of the most commonly cited examples of human multisensory integration occurs during exposure to natural speech, when the vocal and the visual aspects of the signal are integrated in a unitary percept. Audiovisual association of facial gestures and vocal sounds has been demonstrated in nonhuman primates and in prelinguistic children, arguing for a general basis for this capacity. One critical question, however, concerns the role of attention in such multisensory integration. Although both behavioral and neu…

Cited by 339 publications (382 citation statements)
References 29 publications
“…This could help reduce effective set size, and thus perceptual load, allowing audio-visual integration to be more effective. This explanation is indirectly supported by previous findings indicating that cross-modal integration under high-perceptual-load conditions is mediated by a serial, attentive process [38,39,52], and therefore should be more effective in conditions where there are fewer possible auditory-visual associations. Audio-visual coincidence selection can be enabled in a variety of ways, such as using sparse visual displays (as in many multi-sensory enhancement experiments), or by the saliency and temporal informativeness of the accessory acoustic cue [36].…”
Section: Discussion (supporting)
confidence: 71%
“…Moreover, paradigms where perceptual load is high (i.e. when the matching between sound and visual events must be extracted from complex, dynamically changing events) have typically failed to demonstrate cross-modal enhancement in search tasks [38,39].…”
Section: Introduction (mentioning)
confidence: 99%
“…This may be critical to the low-level interactions between the two modalities reported by these studies. Many studies have reported that attention is a prerequisite for observing multisensory integration (Alsius, Navarra, Campbell, & Soto-Faraco, 2005; Alsius, Navarra, & Soto-Faraco, 2007; Fujisaki, Koene, Arnold, Johnston, & Nishida, 2006; Talsma, Doty, & Woldorff, 2007), although others report findings to the contrary (Van der Burg et al., 2008a; and see Talsma, Senkowski, Soto-Faraco, & Woldorff, 2010, for a recent review regarding the role of attention in multisensory integration).…”
Section: Discussion (mentioning)
confidence: 99%
“…One might, as an alternative, have used a more demanding on-line task that allows one to keep track of performance during the exposure phase. Participants might for example track a concurrent visual stimulus while being exposed to the lipread information, as this is relatively easy to measure (see e.g., Alsius, Navarra, Campbell, & Soto-Faraco, 2005). However, a disadvantage of this method is that the visual tracking task as such may interfere with lipreading, so there is interference at the sensory level rather than at the level at which phonetic recalibration occurs.…”
Section: Discussion (mentioning)
confidence: 99%