We investigated whether lateral masking in the near periphery, caused by inhibitory lateral interactions at an early level of central visual processing, could be weakened by perceptual learning, and whether learning transferred to an untrained, higher-level form of lateral masking known as crowding. The trained task was contrast detection of a Gabor target presented in the near periphery (4°) in the presence of co-oriented, co-aligned, high-contrast Gabor flankers, with target-to-flanker separations along the vertical axis varying from 2λ to 8λ. We found both suppressive and facilitatory lateral interactions (at target-to-flanker distances of 2λ–4λ and 8λ, respectively) that were larger than those found in the fovea. Training reduced suppression but did not increase facilitation. Most importantly, learning reduced crowding and improved contrast sensitivity but had no effect on visual acuity (VA). These results suggest a pattern of connectivity in the periphery different from that in the fovea, as well as a different modulation of this connectivity by perceptual learning, which reduces not only low-level lateral masking but also crowding. These findings have important implications for the rehabilitation of low-vision patients who must use peripheral vision for tasks, such as reading and fine figure-ground segmentation, that normally sighted subjects perform with the fovea.
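The stimulus configuration described above (a low-contrast Gabor target flanked above and below by co-oriented, co-aligned high-contrast Gabors at separations of 2λ–8λ) can be sketched in code. This is an illustrative reconstruction, not the authors' stimulus code; the canvas size, contrasts, and the 8-pixel wavelength are placeholder values:

```python
import numpy as np

def gabor(size_px, wavelength_px, sigma_px, contrast):
    """Gabor patch: a vertical sinusoidal grating windowed by a Gaussian."""
    half = size_px // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    grating = np.cos(2 * np.pi * x / wavelength_px)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma_px**2))
    return contrast * grating * envelope

def target_with_flankers(separation_lambda, wavelength_px=8,
                         target_contrast=0.1, flanker_contrast=0.8):
    """Low-contrast target plus two high-contrast flankers displaced
    vertically (co-aligned) by separation_lambda * wavelength."""
    canvas = np.zeros((256, 256))
    target = gabor(64, wavelength_px, wavelength_px, target_contrast)
    flanker = gabor(64, wavelength_px, wavelength_px, flanker_contrast)
    cy = cx = 128
    d = int(round(separation_lambda * wavelength_px))
    for patch, y0 in ((target, cy), (flanker, cy - d), (flanker, cy + d)):
        h = patch.shape[0]
        r, c = y0 - h // 2, cx - h // 2
        canvas[r:r + h, c:c + h] += patch
    return np.clip(canvas, -1.0, 1.0)

stim = target_with_flankers(separation_lambda=3)
```

Varying `separation_lambda` from 2 to 8 spans the range of target-to-flanker distances tested; in the study, detection of the weak central target was suppressed at the small separations and facilitated at 8λ.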
Heading estimation is vital to everyday navigation and locomotion. Despite more than two decades of extensive behavioral and physiological research on both visual and vestibular heading estimation, its accuracy has not yet been systematically evaluated. Therefore, human visual and vestibular heading estimation was assessed in the horizontal plane using a motion platform and a stereo visual display. Heading angle was overestimated during forward movements and underestimated during backward movements in response to both visual and vestibular stimuli, indicating an overall multimodal bias toward lateral directions. These lateral biases are consistent with the overrepresentation of lateral preferred directions observed in neural populations that carry visual and vestibular heading information, including MSTd and otolith afferent populations. Because of this overrepresentation, population vector decoding yields patterns of bias remarkably similar to those observed behaviorally. Lateral biases are inconsistent with standard Bayesian accounts, which predict that estimates should be biased toward the most common, straight-ahead heading direction. Nevertheless, lateral biases may be functionally relevant: they effectively constitute a perceptual scale expansion around straight ahead, which could allow more precise estimation and provide a high-gain feedback signal that helps maintain a straight-ahead heading during everyday navigation and locomotion.
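The population-vector account of the lateral bias can be illustrated with a toy simulation. The numbers below (tuning width, cluster spread, population size) are arbitrary placeholders, not parameters from the study; the point is only that a rate-weighted vector sum over a population overrepresenting lateral directions pushes heading estimates away from straight ahead:

```python
import numpy as np

def population_vector_decode(preferred_dirs, rates):
    """Estimate heading as the direction of the rate-weighted vector sum
    of the population's preferred directions (all angles in radians)."""
    x = np.sum(rates * np.cos(preferred_dirs))
    y = np.sum(rates * np.sin(preferred_dirs))
    return np.arctan2(y, x)

def von_mises_tuning(heading, preferred, kappa=2.0):
    """Bell-shaped circular tuning curve (von Mises profile, peak = 1)."""
    return np.exp(kappa * (np.cos(heading - preferred) - 1.0))

# Toy population overrepresenting lateral directions: preferred headings
# cluster around +90 and -90 degrees rather than straight ahead (0 deg).
rng = np.random.default_rng(0)
preferred = np.concatenate([rng.normal(np.pi / 2, 0.4, 500),
                            rng.normal(-np.pi / 2, 0.4, 500)])

true_heading = np.deg2rad(20.0)          # slightly off straight ahead
rates = von_mises_tuning(true_heading, preferred)
estimate = population_vector_decode(preferred, rates)
# The decoded heading is pulled toward the lateral axis, i.e., the
# eccentricity of the heading is overestimated, mirroring the behavior.
```

With a uniform distribution of preferred directions the same decoder would recover `true_heading` without bias; the bias here comes entirely from the lateral overrepresentation.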
Self-motion perception involves the integration of visual, vestibular, somatosensory, and motor signals. This article reviews findings from single-unit electrophysiology, functional and structural magnetic resonance imaging, and psychophysics to provide an update on how the human and non-human primate brain integrates multisensory information to estimate one’s position and motion in space. The results indicate that a network of regions in the non-human primate and human brain processes self-motion cues from the different sensory modalities.
The last quarter of a century has seen a dramatic rise of interest in the development of technological solutions for visually impaired people. Despite the availability of many devices, however, user acceptance is low: visually impaired adults do not use these devices, and the devices are too complex for children. Most of them have been developed without considering either the brain mechanisms underlying the deficit or the brain's natural ability to process information, and they rely on complex feedback systems that overwhelm sensory, attentional, and memory capacities. Here we review neuroscientific studies of orientation and mobility in visually impaired adults and children and survey the technological devices developed so far to improve locomotion skills. We also discuss how we think these solutions could be improved. We hope this paper will interest both neuroscientists and technologists and provide a common background for developing new science-driven technology that is better accepted by visually impaired adults and suitable for children with visual disabilities.
There is strong evidence of shared neurophysiological substrates for visual and vestibular processing that likely support our capacity for estimating our own movement through the environment. We examined behavioral consequences of these shared substrates in the form of crossmodal aftereffects. In particular, we examined whether sustained exposure to a visual self-motion stimulus (i.e., optic flow) induces a subsequent bias in nonvisual (i.e., vestibular) self-motion perception in the opposite direction in darkness. Although several previous studies have investigated self-motion aftereffects, none have demonstrated crossmodal transfer, which is the strongest proof that the adapted mechanisms are generalized for self-motion processing. The crossmodal aftereffect was quantified using a motion-nulling procedure in which observers were physically translated on a motion platform to find the movement required to cancel the visually induced aftereffect. Crossmodal transfer was elicited only with the longest-duration visual adaptor (15 s), suggesting that transfer requires sustained vection (i.e., visually induced self-motion perception). Visual-only aftereffects were also measured, but the magnitudes of visual-only and crossmodal aftereffects were not correlated, indicating distinct underlying mechanisms. We propose that crossmodal aftereffects can be understood as an example of contingent or contextual adaptation that arises in response to correlations across signals and functions to reduce these correlations in order to increase coding efficiency. According to this view, crossmodal aftereffects in general (e.g., visual-auditory or visual-tactile) can be explained as accidental manifestations of mechanisms that constantly function to calibrate sensory modalities with each other as well as with the environment.
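The motion-nulling logic can be sketched as a simple adaptive staircase. This is a schematic illustration with a simulated observer, not the authors' actual procedure; the 3 cm aftereffect bias, the step size, and the noise level are invented values:

```python
import numpy as np

def observer_reports_forward(platform_cm, aftereffect_cm, noise_cm, rng):
    """Simulated observer: felt self-motion is the physical platform
    displacement plus the visually induced aftereffect bias, plus noise."""
    felt = platform_cm + aftereffect_cm + rng.normal(0.0, noise_cm)
    return felt > 0.0

def nulling_staircase(aftereffect_cm=3.0, step_cm=1.0, n_trials=200,
                      noise_cm=0.5, seed=1):
    """1-up/1-down staircase: step the platform against each reported
    direction; the mean of the converged settings estimates the physical
    movement that cancels (nulls) the aftereffect."""
    rng = np.random.default_rng(seed)
    platform, history = 0.0, []
    for _ in range(n_trials):
        if observer_reports_forward(platform, aftereffect_cm, noise_cm, rng):
            platform -= step_cm   # felt forward -> move platform backward
        else:
            platform += step_cm
        history.append(platform)
    return float(np.mean(history[n_trials // 2:]))  # discard the approach

null_point = nulling_staircase()
# null_point converges near -3 cm: the backward translation needed to
# cancel the simulated +3 cm forward aftereffect bias.
```

A 1-up/1-down rule converges on the stimulus level yielding 50% "forward" reports, which is exactly the displacement that nulls the aftereffect; the sign of the recovered null point gives the aftereffect's direction, its magnitude the aftereffect's strength.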
Spatial memory is a multimodal representation of the environment that can be mediated by different sensory signals. Here we investigate how the auditory modality influences memorization and contributes to the mental representation of a scene. We designed an audio test, inspired by the Corsi block-tapping test (a validated spatial memory test), for blind individuals. The test was carried out in two conditions, with non-semantic and semantic stimuli, presented in separate sessions on an audio-tactile device. The semantic sounds were spatially arranged to reproduce an auditory scene, which participants explored during the test. We thus examined whether semantic sounds are recalled better than non-semantic ones and whether exposure to an auditory scene can enhance memorization skills. Our results show that sighted subjects performed better than blind participants after exploring the semantic scene, suggesting that blind participants focus on the perceived sound positions and do not use the item locations learned during exploration. We discuss these results in terms of the role of visual experience in spatial memorization skills and the ability to take advantage of semantic information stored in memory.
A growing body of evidence suggests that viewing a photograph depicting motion activates the same direction-selective neurons involved in the perception of real motion. Prolonged exposure (adaptation) to photographs depicting directional motion has been shown to induce motion adaptation and, consequently, a motion aftereffect. The present study investigated whether adapting to photographs depicting humans, animals, and vehicles moving leftward or rightward also generates a positional aftereffect (the motion-induced position shift, MIPS), in which the perceived spatial position of a target pattern is shifted in the direction opposite to that of adaptation. Results showed that adapting to still photographs depicting objects moving in a particular direction shifts the perceived position of subsequently presented stationary objects opposite to the depicted adaptation direction, and that this effect depends on the retinotopic location of the adapting stimulus. These results suggest that implied motion may activate the same direction-selective, speed-tuned mechanisms that produce positional aftereffects when viewing real motion.
The use of virtual environments in functional imaging experiments is a promising way to investigate the neural basis of human navigation and self-motion perception. However, the supine position in the fMRI scanner is unnatural for everyday motion: the head-horizontal self-motion plane is parallel rather than perpendicular to gravity. Earlier studies have shown that the perception of heading from visual self-motion stimuli, such as optic flow, can be modified by visuo-vestibular interactions. With this study, we aimed to identify the effects of the supine body position on visual heading estimation, a basic component of human navigation. Visual and vestibular heading judgments were measured separately in 11 healthy subjects in upright and supine body positions. We tested two planes of self-motion, transverse and coronal, and found that, although vestibular heading perception was strongly modified in the supine position, visual performance, in particular for the preferred head-horizontal (i.e., transverse) plane, did not change. This provides behavioral evidence in humans that heading estimation from self-motion-consistent optic flow is unaffected by supine body orientation, demonstrating that visual heading estimation is one component of human navigation that is robust to the supine position required for functional brain imaging experiments.