Sensory substitution devices (SSDs) have come a long way since they were first developed for visual rehabilitation. They have produced exciting experimental results and have furthered our understanding of the human brain. Unfortunately, they are still not used for practical visual rehabilitation and remain reserved primarily for experiments in controlled settings. Over the past decade, our understanding of the neural mechanisms behind visual restoration has changed as a result of converging evidence, much of which was gathered with SSDs. This evidence suggests that the brain is not a pure sensory machine but rather a highly flexible task machine, i.e., brain regions can maintain or regain their visual function even with input from other senses. This view complements a recent set of more promising behavioral achievements using SSDs and new promising technologies and tools. Together, these changes strongly suggest that the time has come to revive the focus on practical visual rehabilitation with SSDs, and we chart several key steps in this direction, such as training protocols and self-training tools.
Purpose: Sensory-substitution devices (SSDs) provide auditory or tactile representations of visual information. These devices often generate unpleasant sensations and mostly lack color information. We present here a novel SSD aimed at addressing these issues. Methods: We developed the EyeMusic, a novel visual-to-auditory SSD for the blind, providing both shape and color information. Our design uses musical notes on a pentatonic scale, generated by natural instruments, to convey the visual information in a pleasant manner. A short behavioral protocol was used to train blind users to extract shape and color information and to test their acquired abilities. Finally, we conducted a survey and a comparison task to assess the pleasantness of the generated auditory stimuli. Results: We show that basic shape and color information can be decoded from the generated auditory stimuli. High performance levels were achieved by all participants after as little as 2-3 hours of training. Furthermore, users indeed found the stimuli pleasant and potentially tolerable for prolonged use. Conclusions: The novel EyeMusic algorithm provides an intuitive and relatively pleasant way for the blind to extract shape and color information. We suggest that the added functionality and enhanced pleasantness may help facilitate visual rehabilitation.
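The kind of mapping the abstract describes can be illustrated with a minimal sketch: an image is scanned column by column over time, the row of an active pixel sets the pitch on a pentatonic scale, and its color selects an instrument. The scale notes, color-to-instrument table, and image format below are illustrative assumptions, not the published EyeMusic algorithm.

```python
# Illustrative visual-to-auditory mapping in the spirit of the EyeMusic.
# Columns are swept left to right over time; lower row index = higher pitch.
# All concrete values here are assumptions for the sketch.

PENTATONIC_MIDI = [60, 62, 64, 67, 69]  # C-major pentatonic, low to high
INSTRUMENT_FOR_COLOR = {                # hypothetical color-to-timbre table
    "white": "choir", "blue": "trumpet", "red": "organ",
    "green": "reed", "yellow": "strings",
}

def image_to_notes(image, col_duration=0.1):
    """image: grid (rows x cols) of color names or None for background.
    Grid height is assumed to equal the scale length (5 rows here).
    Returns a list of (onset_time_s, midi_note, instrument) events."""
    events = []
    n_rows = len(image)
    for col in range(len(image[0])):
        onset = col * col_duration
        for row in range(n_rows):
            color = image[row][col]
            if color is not None:
                # smaller row index (higher on the image) -> higher pitch
                note = PENTATONIC_MIDI[n_rows - 1 - row]
                events.append((onset, note, INSTRUMENT_FOR_COLOR[color]))
    return events
```

Sweeping columns as a time axis is what lets a listener recover horizontal position from onset time and vertical position from pitch, which is the intuition behind decoding shape from sound.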
A distinct preference for visual number symbols was recently discovered in the human right inferior temporal gyrus (rITG). It remains unclear how this preference emerges, what shape biases contribute to its formation, and whether visual processing underlies it. Here we use congenital blindness as a model for brain development without visual experience. During fMRI, we present blind subjects with shapes encoded using a novel visual-to-music sensory-substitution device (the EyeMusic). Greater activation is observed in the rITG when subjects process symbols as numbers compared with control tasks on the same symbols. Using resting-state fMRI in the blind and sighted, we further show that the areas with preference for numerals and letters exhibit distinct patterns of functional connectivity with quantity- and language-processing areas, respectively. Our findings suggest that specificity in the ventral ‘visual’ stream can emerge independently of sensory modality and visual experience, under the influence of distinct connectivity patterns.
Purpose: Independent mobility is one of the most pressing problems facing people who are blind. We present the EyeCane, a new mobility aid aimed at increasing perception of the environment beyond what the traditional White Cane provides, for tasks such as distance estimation, navigation, and obstacle detection. Methods: The EyeCane enhances the traditional White Cane by using tactile and auditory output to increase the detectable distance and angles. It circumvents the technical pitfalls of other devices, such as weight, short battery life, complex interface schemes, and slow learning curves. It implements multiple beams to enable detection of obstacles at different heights, and narrow beams to provide active sensing that can potentially increase the user's spatial perception of the environment. Participants were tasked with using the EyeCane for several basic tasks after minimal training. Results: Blind and blindfolded-sighted participants were able to use the EyeCane successfully for distance estimation, simple navigation, and simple obstacle detection after only several minutes of training. Conclusions: These results demonstrate the EyeCane's potential for mobility rehabilitation. The short training time is especially important since available mobility-training resources are limited, not always available, and can be quite expensive and/or entail long waiting periods.
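A distance-to-feedback mapping of the kind the EyeCane abstract describes can be sketched simply: a narrow range sensor drives tactile or auditory pulses whose rate increases as an obstacle gets closer. The range limit and pulse rates below are illustrative assumptions, not the device's actual parameters.

```python
# Sketch of a distance-to-pulse-rate mapping for an active-sensing aid.
# Closer obstacle -> faster pulses; nothing in range -> silence.
# All numeric parameters are assumptions for illustration.

MAX_RANGE_M = 5.0   # assumed maximum detectable distance
MIN_RATE_HZ = 1.0   # pulse rate for an obstacle at maximum range
MAX_RATE_HZ = 20.0  # pulse rate for an obstacle at point-blank range

def pulse_rate(distance_m):
    """Map a distance reading (meters, or None for no echo) to a
    tactile/auditory pulse rate in Hz; out of range returns 0.0."""
    if distance_m is None or distance_m > MAX_RANGE_M:
        return 0.0
    frac = 1.0 - distance_m / MAX_RANGE_M  # 0.0 at far edge, 1.0 at contact
    return MIN_RATE_HZ + frac * (MAX_RATE_HZ - MIN_RATE_HZ)
```

Because the mapping is monotonic and continuous, a user can read distance directly from pulse rate, which is one plausible reason such interfaces can be learned in minutes rather than hours.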
Under certain specific conditions, people who are blind have a perception of space equivalent to that of sighted individuals. However, in most cases their spatial perception is impaired. Is this simply due to their current lack of access to visual information, or does the lack of visual information throughout development prevent the proper integration of the neural systems underlying spatial cognition? Sensory substitution devices (SSDs) can transfer visual information via other senses and provide a unique tool to examine this question. We hypothesized that the use of our SSD (the EyeCane: a device that translates distance information into sounds and vibrations) could enable blind people to attain a performance level similar to that of the sighted in a spatial navigation task. We gave fifty-six participants training with the EyeCane. They navigated real life-size mazes using the EyeCane SSD and virtual renditions of the same mazes using a virtual EyeCane. The participants were divided into four groups according to visual experience: congenitally blind, low vision & late blind, blindfolded sighted, and sighted visual controls. We found that with the EyeCane participants made fewer errors in the maze, had fewer collisions, and completed the maze in less time in the last session compared to the first. By the third session, participants had improved to the point where individual trials were no longer significantly different from the initial performance of the sighted visual group in terms of errors, time, and collisions.
Recent evidence suggests that mesoscopic neural oscillations measured via intracranial electroencephalography exhibit spatial representations, which were previously observed only at the micro- and macroscopic levels of brain organization. Specifically, theta (and gamma) oscillations correlate with movement, speed, distance, specific locations, and proximity to goals and boundaries. In the entorhinal cortex, they exhibit hexadirectional modulation, which is putatively linked to grid-cell activity. Understanding this mesoscopic neural code is crucial because information represented by oscillatory power and phase may complement the information content at other levels of brain organization. Mesoscopic neural oscillations help bridge the gap between single-neuron and macroscopic brain signals of spatial navigation and may provide a mechanistic basis for novel biomarkers and therapeutic targets to treat diseases causing spatial disorientation.
The entorhinal cortex contains a network of grid cells that play a fundamental part in the brain’s spatial system, supporting tasks such as path integration and spatial memory. In rodents, grid cells are thought to rely on network theta oscillations, but such signals are not evident in all species, challenging our understanding of the physiological basis of the grid network. We analyzed intracranial recordings from neurosurgical patients during virtual navigation to identify oscillatory characteristics of the human entorhinal grid network. The power of entorhinal theta oscillations showed six-fold modulation according to the virtual heading during navigation, which is a hypothesized signature of grid representations. Furthermore, modulation strength correlated with spatial memory performance. These results demonstrate the connection between theta oscillations and the human entorhinal grid network and show that features of grid-like neuronal representations can be identified from population electrophysiological recordings.
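The six-fold modulation reported above can be quantified, in generic form, by projecting theta power measured at different movement headings onto cos(6θ) and sin(6θ) regressors and taking the amplitude of the resulting component. The estimator below is a standard least-squares sketch of that idea under uniform heading coverage, not the paper's exact analysis pipeline.

```python
# Sketch of estimating hexadirectional (six-fold) modulation strength:
# the amplitude of the cos(6*theta)/sin(6*theta) component of power
# sampled across headings. Assumes roughly uniform heading coverage.
import math

def hexadirectional_amplitude(headings_rad, power):
    """Least-squares amplitude of the six-fold component of `power`
    sampled at the headings in `headings_rad` (radians)."""
    n = len(power)
    # Fourier-style projections onto the 6-cycle basis functions
    c = sum(p * math.cos(6 * h) for h, p in zip(headings_rad, power)) * 2 / n
    s = sum(p * math.sin(6 * h) for h, p in zip(headings_rad, power)) * 2 / n
    return math.hypot(c, s)  # amplitude, invariant to the grid's orientation
```

Taking the hypotenuse of the two projections makes the estimate insensitive to the (unknown) orientation of the putative grid, which is why both the cosine and sine regressors are needed.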