The perception of simultaneity between auditory and visual information is of crucial importance for maintaining a coordinated representation of a multisensory event. Here we show that the perceptual system is able to adaptively recalibrate itself to audio-visual temporal asynchronies. Participants were exposed to a train of sounds and light flashes with a constant time lag ranging from -200 ms (sound first) to +200 ms (light first). Following this exposure, a temporal order judgement (TOJ) task was performed in which a sound and a light were presented with a stimulus onset asynchrony (SOA) chosen from 11 values between -240 and +240 ms. Participants either judged whether the sound or the light was presented first, or whether the sound and light were presented simultaneously or successively. The point of subjective simultaneity (PSS) was, in both cases, shifted in the direction of the exposure lag, indicative of recalibration.
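The abstract does not spell out how the PSS is computed, but a standard approach for TOJ data is to fit a cumulative Gaussian psychometric function to the proportion of "light first" responses across SOAs and read off the 50% point as the PSS. The sketch below illustrates this with invented response proportions; the SOA grid mirrors the design described above, and all other values are assumptions for illustration only.

```python
# Minimal sketch: estimating the point of subjective simultaneity (PSS)
# from temporal order judgement (TOJ) data. The SOA grid mirrors the design
# described above; the response proportions are invented for illustration.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# 11 SOAs between -240 ms (sound first) and +240 ms (light first)
soas = np.linspace(-240, 240, 11)

# Hypothetical proportion of "light first" responses at each SOA,
# shifted slightly as if measured after exposure to a "light first" lag.
p_light_first = np.array([0.02, 0.05, 0.08, 0.15, 0.30,
                          0.45, 0.65, 0.80, 0.90, 0.96, 0.99])

def cumulative_gaussian(soa, pss, sigma):
    """Psychometric function: P('light first') as a function of SOA."""
    return norm.cdf(soa, loc=pss, scale=sigma)

(pss, sigma), _ = curve_fit(cumulative_gaussian, soas, p_light_first,
                            p0=[0.0, 100.0])
print(f"Estimated PSS: {pss:.1f} ms, slope (sigma): {sigma:.1f} ms")
```

Comparing the fitted PSS across exposure conditions (sound-first versus light-first lags) would then quantify the recalibration shift reported above.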
The aftereffects indicative of cross-modal recalibration that are observed after exposure to spatially incongruent inputs from different sensory modalities have so far not been demonstrated for identity incongruence. We show that exposure to incongruent audiovisual speech (producing the well-known McGurk effect) can recalibrate auditory speech identification. In Experiment 1, exposure to an ambiguous sound intermediate between /aba/ and /ada/ dubbed onto a video of a face articulating either /aba/ or /ada/ increased the proportion of /aba/ or /ada/ responses, respectively, during subsequent sound identification trials. Experiment 2 demonstrated either the same recalibration effect or the opposite one (fewer /aba/ or /ada/ responses, revealing selective speech adaptation), depending on whether the ambiguous sound or a congruent nonambiguous one was used during exposure. In separate forced-choice identification trials, bimodal stimulus pairs producing these contrasting effects were identically categorized, which makes a role of postperceptual factors in the generation of the effects unlikely.
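In a design like this, the recalibration aftereffect can be summarized as the change in the proportion of /aba/ responses to ambiguous test tokens after /aba/-video versus /ada/-video exposure. The sketch below is a hedged illustration with invented responses, not the authors' analysis.

```python
# Sketch: summarizing a recalibration aftereffect as the change in the
# proportion of /aba/ responses to ambiguous auditory test tokens after
# exposure to an /aba/- versus /ada/-articulating face. Data are invented.
from collections import Counter

post_aba_exposure = ["aba", "aba", "ada", "aba", "aba", "aba", "ada", "aba"]
post_ada_exposure = ["ada", "aba", "ada", "ada", "ada", "aba", "ada", "ada"]

def prop_aba(responses):
    """Proportion of /aba/ responses in a block of identification trials."""
    return Counter(responses)["aba"] / len(responses)

aftereffect = prop_aba(post_aba_exposure) - prop_aba(post_ada_exposure)
print(f"P(/aba/) after /aba/ video: {prop_aba(post_aba_exposure):.2f}")
print(f"P(/aba/) after /ada/ video: {prop_aba(post_ada_exposure):.2f}")
print(f"Aftereffect (difference):   {aftereffect:+.2f}")
```

A positive difference reflects recalibration toward the lip-read identity; a negative difference would reflect selective speech adaptation, as observed with nonambiguous adapters in Experiment 2.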
A question that has emerged over recent years is whether audiovisual (AV) speech perception is a special case of multisensory perception. Event-related potential (ERP) studies have found that auditory neural activity (the N1 component of the ERP) induced by speech is suppressed and speeded up when a speech sound is accompanied by concordant lip movements. In Experiment 1, we show that this AV interaction is not speech-specific. Ecologically valid nonspeech AV events (actions performed by an actor, such as handclapping) were associated with a similar speeding-up and suppression of auditory N1 amplitude as AV speech (syllables). Experiment 2 demonstrated that these AV interactions were not influenced by whether A and V were congruent or incongruent. In Experiment 3, we show that the AV interaction on the N1 was absent when there was no anticipatory visual motion, indicating that the interaction only occurred when visual anticipatory motion preceded the sound. These results demonstrate that the visually induced speeding-up and suppression of auditory N1 amplitude reflect multisensory integrative mechanisms of AV events that crucially depend on whether vision predicts when the sound occurs.
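The two N1 measures at stake here are peak amplitude (suppression) and peak latency (speeding-up) in the audiovisual versus audio-only condition. The sketch below illustrates how these could be extracted; the waveforms are simulated stand-ins for condition-averaged ERPs, and the window, sampling rate, and amplitudes are assumptions, not values from the study.

```python
# Sketch: comparing auditory N1 amplitude and latency between audio-only (A)
# and audiovisual (AV) conditions. The ERP waveforms are simulated; in a real
# analysis they would be condition-averaged EEG epochs at a fronto-central site.
import numpy as np

fs = 500                                  # sampling rate in Hz (assumed)
t = np.arange(-0.1, 0.4, 1 / fs)          # time relative to sound onset (s)

def simulated_erp(n1_latency, n1_amplitude):
    """Toy ERP: a negative Gaussian deflection standing in for the N1."""
    return n1_amplitude * np.exp(-((t - n1_latency) ** 2) / (2 * 0.02 ** 2))

erp_a = simulated_erp(0.110, -6.0)        # audio-only: larger, later N1
erp_av = simulated_erp(0.095, -4.5)       # audiovisual: suppressed, earlier N1

def n1_peak(erp, window=(0.05, 0.20)):
    """Most negative point within the N1 search window: (amplitude, latency)."""
    mask = (t >= window[0]) & (t <= window[1])
    idx = np.argmin(erp[mask])
    return erp[mask][idx], t[mask][idx]

(amp_a, lat_a), (amp_av, lat_av) = n1_peak(erp_a), n1_peak(erp_av)
print(f"A  : N1 = {amp_a:.1f} uV at {lat_a * 1000:.0f} ms")
print(f"AV : N1 = {amp_av:.1f} uV at {lat_av * 1000:.0f} ms")
print(f"Suppression: {amp_av - amp_a:+.1f} uV, "
      f"speeding-up: {(lat_a - lat_av) * 1000:.0f} ms")
```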
Exposure to incongruent auditory and visual speech produces both visual recalibration and selective adaptation of auditory speech identification. In an earlier study, exposure to an ambiguous auditory utterance (intermediate between /aba/ and /ada/) dubbed onto the video of a face articulating either /aba/ or /ada/ recalibrated the perceived identity of auditory targets in the direction of the visual component, while exposure to congruent non-ambiguous /aba/ or /ada/ pairs created selective adaptation, i.e. a shift of perceived identity in the opposite direction [Bertelson, P., Vroomen, J., & de Gelder, B. (2003). Visual recalibration of auditory speech identification: a McGurk aftereffect. Psychological Science, 14, 592-597]. Here, we examined the build-up course of the after-effects produced by the same two types of bimodal adapters, over a range of 1 to 256 exposure presentations. The (negative) after-effects of non-ambiguous congruent adapters increased monotonically across that range, while those of ambiguous incongruent adapters followed a curvilinear course, going up and then down with increasing exposure. This pattern is discussed in terms of an asynchronous interaction between recalibration and selective adaptation processes.
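One way to make the "asynchronous interaction" idea concrete is a toy model in which recalibration and selective adaptation each build up exponentially but at different rates: recalibration saturates quickly, while adaptation accumulates slowly but keeps growing. The sketch below is an illustration of that qualitative pattern only, not the authors' model; all parameter values are invented.

```python
# Toy illustration (not the authors' model) of how an asynchronous interaction
# between recalibration and selective adaptation could yield the reported
# build-up courses: curvilinear for ambiguous adapters, monotonic (negative)
# for congruent non-ambiguous adapters. All parameter values are invented.
import numpy as np

n_exposures = np.array([1, 2, 4, 8, 16, 32, 64, 128, 256])

def exponential_buildup(n, asymptote, rate):
    """Simple saturating build-up of an aftereffect with exposure count n."""
    return asymptote * (1 - np.exp(-rate * n))

# Ambiguous adapters: fast-saturating recalibration (positive shift) minus
# slowly accumulating adaptation (negative shift) -> rises, then falls.
recalibration = exponential_buildup(n_exposures, asymptote=0.30, rate=0.50)
adaptation = exponential_buildup(n_exposures, asymptote=0.35, rate=0.02)
net_ambiguous = recalibration - adaptation

# Non-ambiguous congruent adapters: adaptation only, so the (negative)
# aftereffect grows monotonically with exposure.
net_congruent = -exponential_buildup(n_exposures, asymptote=0.35, rate=0.02)

for n, amb, con in zip(n_exposures, net_ambiguous, net_congruent):
    print(f"{n:>3} exposures: ambiguous {amb:+.2f}, congruent {con:+.2f}")
```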
Functional neuroimaging experiments have shown that recognition of emotional expressions does not depend on awareness of visual stimuli and that unseen fear stimuli can activate the amygdala via a colliculopulvinar pathway. Perception of emotional expressions in the absence of awareness in normal subjects has some similarities with the unconscious recognition of visual stimuli which is well documented in patients with striate cortex lesions (blindsight). Presumably in these patients residual vision engages alternative extra-striate routes such as the superior colliculus and pulvinar. Against this background, we conjectured that a blindsight subject (GY) might recognize facial expressions presented in his blind field. The present study now provides direct evidence for this claim.