2015
DOI: 10.7554/elife.04995
Auditory selective attention is enhanced by a task-irrelevant temporally coherent visual stimulus in human listeners

Abstract: In noisy settings, listening is aided by correlated dynamic visual cues gleaned from a talker's face—an improvement often attributed to visually reinforced linguistic information. In this study, we aimed to test the effect of audio–visual temporal coherence alone on selective listening, free of linguistic confounds. We presented listeners with competing auditory streams whose amplitude varied independently and a visual stimulus with varying radius, while manipulating the cross-modal temporal relationships. Per…
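The design sketched in the abstract — competing auditory streams with independently varying amplitude envelopes, plus a visual stimulus whose radius can be yoked to one of them — can be illustrated with a short sketch. This is a hypothetical reconstruction, not the authors' stimulus code: the envelope bandwidth, sampling rate, seeds, and radius mapping are all assumptions chosen for illustration.

```python
import numpy as np

def make_envelope(duration_s=2.0, fs=100, cutoff_hz=7.0, seed=0):
    """Slowly varying amplitude envelope: low-pass-filtered noise.
    All parameters are illustrative assumptions, not the paper's values."""
    rng = np.random.default_rng(seed)
    n = int(duration_s * fs)
    noise = rng.standard_normal(n)
    # Crude low-pass filter: zero out FFT bins above cutoff_hz.
    spec = np.fft.rfft(noise)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    spec[freqs > cutoff_hz] = 0.0
    env = np.fft.irfft(spec, n)
    # Normalise to the range [0, 1] for use as an amplitude/radius track.
    env -= env.min()
    env /= env.max()
    return env

# Two competing auditory streams with independent envelopes.
target_env = make_envelope(seed=1)
distractor_env = make_envelope(seed=2)

# Coherent condition: the visual radius tracks the target's envelope.
radius_coherent = 0.5 + 0.5 * target_env
# Independent condition: the radius follows an unrelated envelope.
radius_independent = 0.5 + 0.5 * make_envelope(seed=3)

# Temporal coherence quantified as envelope correlation.
r_target = np.corrcoef(radius_coherent, target_env)[0, 1]
r_indep = np.corrcoef(radius_independent, target_env)[0, 1]
```

In this toy setup the coherent radius correlates perfectly with the target envelope (it is an affine transform of it), while the independent radius shows only chance-level correlation — the contrast the study's conditions manipulate.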

Cited by 82 publications (137 citation statements)
References 46 publications
“…properties of a visual stimulus can be exploited to detect correspondence between auditory and visual streams (Crosse et al., 2015; Denison et al., 2013; Rahne et al., 2008), can bias the perceptual organisation of a sound scene (Brosch et al., 2015), and can enhance or impair listening performance depending on whether the visual stimulus is temporally coherent with a target or distractor sound stream (Maddox et al., 2015). Together, these behavioural results suggest that temporal coherence between auditory and visual stimuli can promote binding of cross-modal features to enable the formation of an auditory-visual (AV) object (Bizley et al., 2016b).…”
Section: Analysis
Confidence: 99%
“…For instance, the distribution of attention in space is guided by information from different sensory modalities, as shown by cross-modal and multisensory cueing studies (e.g., Spence & Driver, 2004). Most research on cross-modal interactions in attention orienting has employed manipulations of spatial (Spence & Driver, 1994; Driver & Spence, 1998; McDonald, Teder-Salejarvi, & Hillyard, 2000) and temporal (Busse et al., 2005; Van der Burg, Olivers, Bronkhorst, & Theeuwes, 2008; Van den Brink, Cohen, van der Burg, Talsma, Vissers, & Slagter, 2014; Maddox, Atilgan, Bizley, & Lee, 2015) congruence between stimuli across modalities. However, recent studies have highlighted that in real-world scenarios, multisensory inputs do not only convey temporal and spatial congruence, but also bear semantic relationships.…”
Section: Introduction
Confidence: 99%
“…For instance, sound elements that are near each other in frequency and close together in time tend to be perceived as coming from the same source. When sound elements are comodulated with relatively slow modulations below about 7 Hz (turning on and off together or changing amplitude together), they tend to be grouped together perceptually (see examples in Fujisaki & Nishida, 2005; Hall & Grose, 1990; Maddox, Atilgan, Bizley, & Lee, 2015; Oxenham & Dau, 2001). Indeed, in typical English speech, syllabic rates are in this range, typically below 10 Hz (Greenberg, Carvey, Hitchcock, & Chang, 2003).…”
Section: Auditory Selective Attention Depends On Auditory Object Form
Confidence: 99%