Multisensory interactions are observed in species from single-cell organisms to humans. Important early work was primarily carried out in the cat superior colliculus, and a set of critical parameters for their occurrence was defined. Primary among these were temporal synchrony and spatial alignment of bisensory inputs. Here, we assessed whether spatial alignment was also a critical parameter for the temporally earliest multisensory interactions that are observed in lower-level sensory cortices of humans. While multisensory interactions in humans have been shown behaviorally for spatially disparate stimuli (e.g. the ventriloquist effect), it is not clear whether such effects are due to early sensory-level integration or later perceptual-level processing. In the present study, we used psychophysical and electrophysiological indices to show that auditory-somatosensory interactions in humans occur via the same early sensory mechanism both when stimuli are in and out of spatial register. Subjects more rapidly detected multisensory than unisensory events. At just 50 ms post-stimulus, neural responses to the multisensory 'whole' were greater than the summed responses from the constituent unisensory 'parts'. For all spatial configurations, this effect followed from a modulation of the strength of brain responses, rather than the activation of regions specifically responsive to multisensory pairs. Using local auto-regressive average source estimation, we localized the initial auditory-somatosensory interactions to auditory association areas contralateral to the side of somatosensory stimulation. Thus, multisensory interactions can occur across wide peripersonal spatial separations remarkably early in sensory processing and in cortical regions traditionally considered unisensory.
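The 'whole versus sum of parts' comparison described above can be sketched as a simple additive-model test on trial-averaged evoked responses. The array names and synthetic data below are illustrative assumptions for the sketch, not the study's actual recordings or pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative synthetic evoked responses: trials x time samples.
n_trials, n_samples = 100, 200
erp_auditory = rng.normal(0.0, 1.0, (n_trials, n_samples))
erp_somato = rng.normal(0.0, 1.0, (n_trials, n_samples))
erp_multi = rng.normal(0.0, 1.0, (n_trials, n_samples))

# Additive model: under independent unisensory processing, the response
# to the multisensory 'whole' should equal the sum of the unisensory
# 'parts' at every time point.
summed_parts = erp_auditory.mean(axis=0) + erp_somato.mean(axis=0)
whole = erp_multi.mean(axis=0)

# Differences (assessed statistically per time point in practice) index
# a multisensory interaction.
interaction = whole - summed_parts
```

In this framework, a reliable nonzero `interaction` at a given latency is evidence of integration at that processing stage, which is how an effect as early as 50 ms can be interpreted as sensory rather than late perceptual in origin.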
Multisensory object-recognition processes were investigated by examining the combined influence of visual and auditory inputs upon object identification--in this case, pictures and vocalizations of animals. Behaviorally, subjects were significantly faster and more accurate at identifying targets when the picture and vocalization were matched (i.e. from the same animal) than when the target was represented in only one sensory modality. This behavioral enhancement was accompanied by a modulation of the evoked potential in the latency range and general topographic region of the visual evoked N1 component, which is associated with early feature processing in the ventral visual stream. High-density topographic mapping and dipole modeling of this multisensory effect were consistent with generators in lateral occipito-temporal cortices, suggesting that auditory inputs were modulating processing in regions of the lateral occipital cortices. Both the timing and scalp topography of this modulation suggest that there are multisensory effects during what is considered to be a relatively early stage of visual object-recognition processes, and that this modulation occurs in regions of the visual system that have traditionally been held to be unisensory processing areas. Multisensory inputs also modulated the visual 'selection-negativity', an attention-dependent component of the evoked potential that is typically elicited when subjects selectively attend to a particular feature of a visual stimulus.
Event-related fMRI is a powerful tool for localising psychological functions to specific brain areas. However, the number of events required to produce stable activation maps is a poorly investigated and understood problem. Huettel and McCarthy [Huettel, S.A., McCarthy, G., 2001. The effects of single-trial averaging upon the spatial extent of fMRI activation. NeuroReport 12, 2411-2416] have shown that the spatial extent of activation increases monotonically with the number of events in an analysis. In the present paper, this result is replicated and shown to be a consequence of the cross-correlation technique used to determine active voxels; it does not hold, for example, for a GLM analysis. An alternative analysis technique that does not depend on goodness-of-fit to the data is also proposed. This technique calculates an impulse response function (IRF) for each voxel, finds the best-fitting haemodynamic shape to the IRF and returns an area-under-the-curve (%AUC) activation measure. Using spatial extent as a measure, asymptotic behaviour is evident after as few as 25 events for the %AUC analysis technique in a finger-tapping task with non-overlapping haemodynamic responses, and for both the GLM and %AUC techniques in a similar task that allows responses to overlap. The experimental validity of the %AUC technique to identify active brain regions while minimising false positive levels is demonstrated in a group study with 25 participants.
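The %AUC pipeline described above (estimate a per-voxel IRF, fit a canonical haemodynamic shape to it, and take the area under the fitted curve) can be sketched roughly as follows. The gamma-shaped HRF and the least-squares amplitude fit are illustrative assumptions, not the paper's exact implementation.

```python
import math
import numpy as np

def hrf(t, shape_param=6.0):
    """Canonical gamma-shaped haemodynamic response (illustrative form)."""
    t = np.asarray(t, dtype=float)
    return t ** (shape_param - 1) * np.exp(-t) / math.gamma(shape_param)

def auc_activation(irf, t):
    """Scale the canonical shape to the voxel's estimated IRF by least
    squares, then return the area under the fitted curve as the
    %AUC-style activation measure sketched here."""
    template = hrf(t)
    beta = np.dot(template, irf) / np.dot(template, template)  # best-fit amplitude
    dt = t[1] - t[0]
    return beta * np.sum(template) * dt  # rectangle-rule area under fit
```

The key design point the abstract highlights is that the activation measure is the fitted curve's area rather than a goodness-of-fit statistic, so noisy voxels with a genuine scaled response are not penalised the way a cross-correlation threshold would penalise them.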
Electrophysiological studies have revealed a pre-attentive change-detection system in the auditory modality. This system emits a signal termed the mismatch negativity (MMN) when any detectable change in a regular pattern of auditory stimulation occurs. The precise intracranial sources underlying MMN generation, and in particular whether these vary as a function of the acoustic feature that changes, are a matter of some debate. Using functional magnetic resonance imaging, we show that anatomically distinct networks of auditory cortices are activated as a function of the deviating acoustic feature--in this case, tone frequency and tone duration--strongly supporting the hypothesis that MMN generators in auditory cortex are feature dependent. We also detail regions of the frontal and parietal cortices activated by change-detection processes. These regions also show feature dependence, and we hypothesize that they reflect recruitment of attention-switching mechanisms.
Successful integration of auditory and visual inputs is crucial for both basic perceptual functions and for higher-order processes related to social cognition. Autism spectrum disorders (ASD) are characterized by impairments in social cognition and are associated with abnormalities in sensory and perceptual processes. Several groups have reported that individuals with ASD are impaired in their ability to integrate socially relevant audiovisual (AV) information, and it has been suggested that this contributes to the higher-order social and cognitive deficits observed in ASD. However, successful integration of auditory and visual inputs also influences detection and perception of nonsocial stimuli, and integration deficits may impair earlier stages of information processing, with cascading downstream effects. To assess the integrity of basic AV integration, we recorded high-density electrophysiology from a cohort of high-functioning children with ASD (7-16 years) while they performed a simple AV reaction time task. Children with ASD showed considerably less behavioral facilitation to multisensory inputs, deficits that were paralleled by less effective neural integration. Evidence for processing differences relative to typically developing children was seen as early as 100 ms poststimulation, and topographic analysis suggested that children with ASD relied on different cortical networks during this early multisensory processing stage.
Under noisy listening conditions, visualizing a speaker's articulations substantially improves speech intelligibility. This multisensory speech integration ability is crucial to effective communication, and the appropriate development of this capacity greatly impacts a child's ability to successfully navigate educational and social settings. Research shows that multisensory integration abilities continue developing late into childhood. The primary aim here was to track the development of these abilities in children with autism, since multisensory deficits are increasingly recognized as a component of the autism spectrum disorder (ASD) phenotype. The abilities of high-functioning ASD children (n = 84) to integrate seen and heard speech were assessed cross-sectionally, while environmental noise levels were systematically manipulated, comparing them with age-matched neurotypical children (n = 142). Severe integration deficits were uncovered in ASD, which became increasingly pronounced as background noise increased. These deficits were evident in school-aged ASD children (5-12 years old), but were fully ameliorated in ASD children entering adolescence (13-15 years old). The severity of the multisensory deficits uncovered has important implications for educators and clinicians working in ASD. We consider the observation that the multisensory speech system recovers substantially in adolescence as an indication that it is likely amenable to intervention during earlier childhood, with potentially profound implications for the development of social communication abilities in ASD children.
The integration of multisensory information is essential to forming meaningful representations of the environment. Adults benefit from related multisensory stimuli but the extent to which the ability to optimally integrate multisensory inputs for functional purposes is present in children has not been extensively examined. Using a cross-sectional approach, high-density electrical mapping of event-related potentials (ERPs) was combined with behavioral measures to characterize neurodevelopmental changes in basic audiovisual (AV) integration from middle childhood through early adulthood. The data indicated a gradual fine-tuning of multisensory facilitation of performance on an AV simple reaction time task (as indexed by race model violation), which reaches mature levels by about 14 years of age. They also revealed a systematic relationship between age and the brain processes underlying multisensory integration (MSI) in the time frame of the auditory N1 ERP component (∼ 120 ms). A significant positive correlation between behavioral and neurophysiological measures of MSI suggested that the underlying brain processes contributed to the fine-tuning of multisensory facilitation of behavior that was observed over middle childhood. These findings are consistent with protracted plasticity in a dynamic system and provide a starting point from which future studies can begin to examine the developmental course of multisensory processing in clinical populations.
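Race-model violation, the behavioral index of multisensory facilitation mentioned above, is conventionally computed from cumulative reaction-time distributions via Miller's race-model inequality. The function below is a minimal sketch of that standard formulation; the variable names and example RTs are assumptions for illustration.

```python
import numpy as np

def cdf(rts, t_grid):
    """Empirical cumulative RT distribution evaluated on a time grid (ms)."""
    rts = np.asarray(rts, dtype=float)
    return np.mean(rts[:, None] <= t_grid, axis=0)

def race_model_violation(rt_a, rt_v, rt_av, t_grid):
    """Miller's race-model inequality: under independent parallel
    channels, P(RT_av <= t) may not exceed P(RT_a <= t) + P(RT_v <= t).
    Positive return values index violation, i.e. evidence that the
    inputs were integrated rather than merely racing."""
    bound = np.clip(cdf(rt_a, t_grid) + cdf(rt_v, t_grid), 0.0, 1.0)
    return cdf(rt_av, t_grid) - bound
```

Evaluating this difference across the RT distribution is what allows facilitation beyond statistical summation to be quantified per participant, and hence tracked against age as in the developmental analysis described above.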