2010
DOI: 10.1162/jocn.2009.21308

Visual Anticipatory Information Modulates Multisensory Interactions of Artificial Audiovisual Stimuli

Abstract: The neural activity of speech sound processing (the N1 component of the auditory ERP) can be suppressed if a speech sound is accompanied by concordant lip movements. Here we demonstrate that this audiovisual interaction is neither speech specific nor linked to humanlike actions but can be observed with artificial stimuli if their timing is made predictable. In Experiment 1, a pure tone synchronized with a deformation of a rectangle induced a smaller auditory N1 than auditory-only presentations if the temporal …

Cited by 139 publications (159 citation statements)
References 54 publications
“…Given that in AV speech, lipread input usually precedes the auditory signal (e.g., Chandrasekaran, Trubanova, Stillittano, Caplier, & Ghazanfar, 2009), Stekelenburg and Vroomen (2007) argued that the visually-induced N1 modulations arise whenever the visual input precedes the audio, thus warning the listener about when the sound is going to occur. This was corroborated by similar results obtained with artificial AV stimuli in which anticipatory visual motion reliably predicted sound onset (Vroomen & Stekelenburg, 2010). It thus appears that the temporal characteristics of an AV stimulus in which the visual component precedes the audio are responsible for the early N1 effects, irrespective of whether the stimuli are ecologically valid or artificial.…”
Section: Introduction (supporting)
confidence: 78%
“…In line with earlier reports (e.g., Besle et al., 2004; Fort, Delpuech, Pernier, & Giard, 2002; Giard & Peronnet, 1999; Klucharev et al., 2003; Vroomen & Stekelenburg, 2010), this allowed us to compare the audiovisual (AV - V) with the auditory-only (A) condition and interpret any difference as an integration effect between the two modalities. In a first analysis, we compared both groups on the N1 and P2 peaks to reveal lipread-induced modulations that reflect visual prediction (N1) and possibly, a phonetic mechanism (P2).…”
Section: EEG Recording and Analysis (supporting)
confidence: 67%
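The additive-model comparison described in this excerpt (subtracting the visual-only ERP from the audiovisual ERP and contrasting the result with the auditory-only ERP, then reading off the N1 and P2 peaks) can be sketched as follows. This is a minimal illustration, not the authors' analysis code; the sampling rate, epoch window, peak windows, and array names are all assumptions.

```python
# Minimal sketch of the (AV - V) vs. A additive-model comparison described above.
# Sampling rate, epoch limits, and N1/P2 windows are illustrative assumptions.
import numpy as np

fs = 512                                  # assumed sampling rate (Hz)
times = np.arange(-0.1, 0.5, 1 / fs)      # epoch from -100 to 500 ms

# Grand-average ERPs at one electrode (placeholders for real averaged EEG data).
erp_av = np.zeros_like(times)             # audiovisual condition
erp_v  = np.zeros_like(times)             # visual-only condition
erp_a  = np.zeros_like(times)             # auditory-only condition

# Additive model: any difference between (AV - V) and A is interpreted
# as an audiovisual interaction effect.
erp_av_minus_v = erp_av - erp_v

def peak_amplitude(erp, times, tmin, tmax, polarity=-1):
    """Return the most extreme value of the given polarity within a time window."""
    window = erp[(times >= tmin) & (times <= tmax)]
    return window.min() if polarity < 0 else window.max()

# N1: negative peak roughly 80-140 ms; P2: positive peak roughly 150-250 ms.
n1_av_v = peak_amplitude(erp_av_minus_v, times, 0.08, 0.14, polarity=-1)
n1_a    = peak_amplitude(erp_a,          times, 0.08, 0.14, polarity=-1)
p2_av_v = peak_amplitude(erp_av_minus_v, times, 0.15, 0.25, polarity=+1)
p2_a    = peak_amplitude(erp_a,          times, 0.15, 0.25, polarity=+1)

# A less negative N1 in (AV - V) than in A would indicate visually induced
# suppression of auditory processing, as reported in the cited studies.
n1_suppression = n1_a - n1_av_v
```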