2012
DOI: 10.1163/187847611x620937
The Effect of a Concurrent Working Memory Task and Temporal Offsets on the Integration of Auditory and Visual Speech Information

Abstract: Audiovisual speech perception is an everyday occurrence of multisensory integration. Conflicting visual speech information can influence the perception of acoustic speech (namely the McGurk effect), and auditory and visual speech are integrated over a rather wide range of temporal offsets. This research examined whether the addition of a concurrent cognitive load task would affect the audiovisual integration in a McGurk speech task and whether the cognitive load task would cause more interference at increasin…

Cited by 20 publications (13 citation statements)
References 31 publications (49 reference statements)
“…The behavioral results showed that the McGurk effect was weaker in the Dual than Single task condition, showing an attentional effect on audiovisual speech perception, in agreement with previous results (Tiippana et al, 2004, 2011; Alsius et al, 2005, 2007; Soto-Faraco and Alsius, 2007, 2009; Andersen et al, 2009; Alsius and Soto-Faraco, 2011; Buchan and Munhall, 2011, 2012). However, note that at variance with the results of Alsius et al (2005; see also Alsius et al, 2007), the identification of visual stimuli was poorer in the Dual than Single task condition.…”
Section: Discussion (supporting)
confidence: 91%
“…Several recent studies have, however, put into question the impenetrability of audiovisual integration to attentional modulation, both in the speech (Tiippana et al, 2004, 2011; Alsius et al, 2005, 2007; Soto-Faraco and Alsius, 2007, 2009; Andersen et al, 2009; Fairhall and Macaluso, 2009; Alsius and Soto-Faraco, 2011; Buchan and Munhall, 2011, 2012) and the non-speech domains (e.g., Senkowski et al, 2005; Talsma and Woldorff, 2005; Fujisaki et al, 2006; Talsma et al, 2007). Of particular interest for the current study, Alsius et al (2005) tested to which extent audiovisual speech perception could be modulated by attentional load.…”
Section: Introduction (mentioning)
confidence: 99%
“…One possible explanation for this relatively low value is visual attention. Withdrawing attention from an audiovisual McGurk stimulus by directing attention to a competing auditory or visual stimulus (Alsius et al, 2005), to a somatosensory stimulus (Alsius et al, 2007), or to a concurrent working memory task (Buchan and Munhall, 2012) reduces perception of the McGurk effect. Therefore, it may not be fixation location but rather the locus of visual attention that is the key determinant of whether participants perceive the McGurk effect.…”
Section: Discussion (mentioning)
confidence: 99%
“…Indeed, the McGurk effect can be reduced if there is competing information in the visual modality providing distracting cues (see, e.g., Tiippana, Andersen, & Sams, 2004). It is also decreased by loading the audiovisual speech perception task at hand with a second task performed at the same time (Alsius, Navarra, Campbell, & Soto-Faraco, 2005; Buchan & Munhall, 2012). Nahorna, Berthommier, and Schwartz (2012) showed that if a McGurk target made of an audio "ba" and a video "ga" was preceded by an audiovisual incoherent context made of incompatible auditory and visual speech material (e.g., audio syllables dubbed on video sentences), then the amount of perception of the McGurk fusion "da" was largely decreased.…”
Section: Audiovisual Binding In Speech Perception (mentioning)
confidence: 99%