The brain integrates information from multiple sensory modalities and, through this process, generates a coherent and apparently seamless percept of the external world. Although multisensory integration typically binds information that is derived from the same event, when multisensory cues are somewhat discordant they can result in illusory percepts such as the "ventriloquism effect." These biases in stimulus localization are generally accompanied by the perceptual unification of the two stimuli. In the current study, we sought to further elucidate the relationship between localization biases, perceptual unification and measures of a participant's uncertainty in target localization (i.e., variability). Participants performed an auditory localization task in which they were also asked to report on whether they perceived the auditory and visual stimuli to be perceptually unified. The auditory and visual stimuli were delivered at a variety of spatial (0°, 5°, 10°, 15°) and temporal (200, 500, 800 ms) disparities. Localization bias and reports of perceptual unity occurred even with substantial spatial (i.e., 15°) and temporal (i.e., 800 ms) disparities. Trial-by-trial comparison of these measures revealed a striking correlation: regardless of their disparity, whenever the auditory and visual stimuli were perceived as unified, they were localized at or very near the light. In contrast, when the stimuli were perceived as not unified, auditory localization was often biased away from the visual stimulus. Furthermore, localization variability was significantly less when the stimuli were perceived as unified. Intriguingly, on non-unity trials such variability increased with decreasing disparity. Together, these results suggest strong and potentially mechanistic links between the multiple facets of multisensory integration that contribute to our perceptual Gestalt.
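The trial-by-trial analysis described above can be sketched in code. This is an illustrative example only, not the authors' analysis pipeline: the response values are fabricated for demonstration, and the bias measure (mean shift toward the visual stimulus expressed as a fraction of the audiovisual disparity) is one common convention.

```python
import numpy as np

# Hypothetical trial data: auditory target at 0 deg, visual stimulus at +15 deg.
# 'reported' holds auditory localization responses; 'unified' holds unity reports.
reported = np.array([14.0, 15.5, 13.8, 2.0, -3.5, 14.9, 1.0, 15.2])
unified = np.array([True, True, True, False, False, True, False, True])
visual_loc, auditory_loc = 15.0, 0.0

for label, mask in (("unified", unified), ("non-unified", ~unified)):
    resp = reported[mask]
    # Bias: mean shift toward the visual location, as a fraction of disparity
    # (1.0 = responses fully captured by the light, 0.0 = no visual influence).
    bias = (resp.mean() - auditory_loc) / (visual_loc - auditory_loc)
    sd = resp.std(ddof=1)  # localization variability on these trials
    print(f"{label}: bias = {bias:.2f}, variability (SD) = {sd:.2f} deg")
```

With data patterned like the paper's result, unified trials show near-complete capture by the light and low variability, while non-unified trials show little or negative bias and higher variability.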
Multisensory neurons and their ability to integrate multisensory cues develop gradually in the midbrain [i.e., superior colliculus (SC)]. To examine the possibility that early sensory experience might play a critical role in these maturational processes, animals were raised in the absence of visual cues. As adults, the SC of these animals was found to contain many multisensory neurons, the large majority of which were visually responsive. Although these neurons responded robustly to each of their cross-modal inputs when presented individually, they were incapable of synthesizing this information. These observations suggest that visual experience is critical for the SC to develop the ability to integrate multisensory information, and they lead to the prediction that, in the absence of such experience, animals will be compromised in their sensitivity to cross-modal events.
The ability of a visual signal to influence the localization of an auditory target (i.e., "cross-modal bias") was examined as a function of the spatial disparity between the two stimuli and their absolute locations in space. Three experimental issues were examined: (a) the effect of a spatially disparate visual stimulus on auditory localization judgments; (b) how the ability to localize visual, auditory, and spatially aligned multisensory (visual-auditory) targets is related to cross-modal bias; and (c) the relationship between the magnitude of cross-modal bias and the perception that the two stimuli are spatially "unified" (i.e., originate from the same location). Whereas variability in localization of auditory targets was large and fairly uniform for all tested locations, variability in localizing visual or spatially aligned multisensory targets was much smaller, and increased with increasing distance from the midline. This trend proved to be strongly correlated with biasing effectiveness, for although visual-auditory bias was unexpectedly large in all conditions tested, it decreased progressively (as localization variability increased) with increasing distance from the midline. Thus, central visual stimuli had a substantially greater biasing effect on auditory target localization than did more peripheral visual stimuli. It was also apparent that cross-modal bias decreased as the degree of visual-auditory disparity increased. Consequently, the greatest visual-auditory biases were obtained with small disparities at central locations. In all cases, the magnitude of these biases covaried with judgments of spatial unity. The results suggest that functional properties of the visual system play the predominant role in determining these visual-auditory interactions and that cross-modal biases can be substantially greater than previously noted.
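The reported link between localization variability and biasing effectiveness is a simple correlation across eccentricities, which can be illustrated as follows. All numbers here are fabricated to mimic the qualitative pattern (visual localization variability rising, and bias falling, with distance from the midline); they are not the study's data.

```python
import numpy as np

# Hypothetical summary values per target eccentricity (deg from midline).
eccentricity = np.array([0, 15, 30, 45, 60])
visual_sd = np.array([1.0, 1.6, 2.4, 3.3, 4.5])      # deg, illustrative
bias = np.array([0.95, 0.88, 0.74, 0.61, 0.50])      # fraction of disparity

# As visual localization variability grows with eccentricity, the
# visual-auditory bias shrinks, yielding a strong negative correlation.
r = np.corrcoef(visual_sd, bias)[0, 1]
print(f"Pearson r = {r:.2f}")
```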
Recent advances in electroencephalographic (EEG) acquisition allow for recordings using wet and dry sensors during whole-body motion. The large variety of commercially available EEG systems contrasts with the lack of established methods for objectively describing their performance during whole-body motion. Therefore, the aim of this study was to introduce methods for benchmarking the suitability of new EEG technologies for that context. Subjects performed an auditory oddball task using three different EEG systems (Biosemi wet—BSM, Cognionics wet—Cwet, Cognionics dry—Cdry). Nine subjects performed the oddball task while seated and walking on a treadmill. We calculated EEG epoch rejection rate, pre-stimulus noise (PSN), signal-to-noise ratio (SNR) and EEG amplitude variance across the P300 event window (CVERP) from a subset of 12 channels common to all systems. We also calculated test-retest reliability and the subject's level of comfort while using each system. Our results showed that, using the traditional 75 μV rejection threshold, BSM and Cwet epoch rejection rates were ~25% and ~47% in the seated and walking conditions, respectively. However, this threshold rejected ~63% of epochs for Cdry in the seated condition and excluded 100% of epochs for the majority of subjects during walking. BSM showed predominantly no statistical differences between the seated and walking conditions for all metrics, whereas Cwet showed increases in PSN and CVERP, as well as reduced SNR, in the walking condition. Data quality from Cdry in the seated condition was predominantly inferior in comparison to the wet systems. Test-retest reliability was mostly moderate/good for these variables, especially in seated conditions. In addition, subjects felt less discomfort and were motivated for longer recording periods while using the wet EEG systems in comparison to the dry system.
The proposed method was successful in identifying differences across systems that are mostly caused by motion-related artifacts and usability issues. We conclude that the extraction of the selected metrics from an auditory oddball paradigm may be used as a benchmark method for testing the performance of different EEG systems in mobile conditions. Moreover, dry EEG systems may need substantial improvements to meet the quality standards of wet electrodes.
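Two of the benchmark metrics above, epoch rejection at the traditional 75 μV threshold and a baseline-referenced SNR, can be sketched on synthetic data. This is a hedged illustration, not the paper's code: the data are Gaussian noise, the window bounds are arbitrary placeholders, and the paper's exact PSN/SNR/CVERP definitions may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic epoched EEG in microvolts: (epochs, channels, samples).
n_epochs, n_channels, n_samples = 200, 12, 256
epochs = rng.normal(0.0, 20.0, size=(n_epochs, n_channels, n_samples))

# 1) Epoch rejection: discard any epoch whose absolute amplitude exceeds
#    75 uV on any channel (the "traditional" threshold from the abstract).
keep = (np.abs(epochs) <= 75.0).all(axis=(1, 2))
rejection_rate = 1.0 - keep.mean()

# 2) Average surviving epochs into an ERP; take RMS amplitude over an
#    assumed pre-stimulus baseline window (first 64 samples) as the noise.
erp = epochs[keep].mean(axis=0)
psn = np.sqrt((erp[:, :64] ** 2).mean())

# 3) SNR: peak ERP amplitude in an assumed P300 window over baseline RMS.
snr = np.abs(erp[:, 128:192]).max() / psn

print(f"rejected {rejection_rate:.0%}; PSN = {psn:.2f} uV; SNR = {snr:.2f}")
```

The same three numbers, computed per system and per condition (seated vs. walking), are what make the benchmark comparison possible.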
To better understand human brain dynamics during visually guided locomotion, we developed a method of removing motion artifacts from mobile electroencephalography (EEG) and studied human subjects walking and running over obstacles on a treadmill. We constructed a novel dual-layer EEG electrode system to isolate electrocortical signals, and then validated the system using an electrical head phantom and robotic motion platform. We collected data from young healthy subjects walking and running on a treadmill while they encountered unexpected obstacles to step over. Supplementary motor area and premotor cortex had spectral power increases within ~200 ms after object appearance in delta, theta, and alpha frequency bands (3–13 Hz). That activity was followed by a similar spectral power increase in posterior parietal cortex that decreased in lag time with increasing locomotion speed. The sequence of activation suggests that supplementary motor area and premotor cortex interrupted the gait cycle, while posterior parietal cortex tracked obstacle location for planning foot placement nearly two steps ahead of reaching the obstacle. Together, these results highlight advantages of adopting dual-layer mobile EEG, which should greatly facilitate the study of human brain dynamics in physically active real-world settings and tasks.
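The core idea of a dual-layer system is that the outer electrode layer, electrically isolated from the scalp, records motion artifact but no brain signal, so a scaled copy of it can be regressed out of the inner (scalp) channel. The sketch below is a minimal single-channel least-squares version under fabricated signals; the published pipeline is considerably more elaborate.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
t = np.arange(n) / 500.0                              # 500 Hz sampling (assumed)
neural = np.sin(2 * np.pi * 10 * t)                   # stand-in 10 Hz cortical rhythm
artifact = rng.normal(0.0, 1.0, n).cumsum() / 30.0    # slow, drifting motion artifact

inner = neural + artifact                             # scalp electrode: brain + motion
outer = 0.8 * artifact + rng.normal(0.0, 0.05, n)     # isolated layer: motion only

# Fit a least-squares gain from the artifact-only outer channel onto the
# inner channel, then subtract the scaled reference.
g = np.dot(outer, inner) / np.dot(outer, outer)
cleaned = inner - g * outer
```

Because the outer channel carries (nearly) no neural signal, subtracting its scaled projection removes the shared motion component while leaving the cortical rhythm largely intact.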