An exciting possibility for compensating for the loss of sensory function is to augment deficient senses by conveying the missing information through an intact sense. Here we present an overview of techniques that have been developed for sensory substitution (SS) for the blind, through both touch and audition, with special emphasis on the importance of training in the use of such devices, while highlighting potential pitfalls in their design. One such pitfall is sensory overload: conveying extra information about the environment can exceed the limits of attentional capacity, so designs should focus on key information and avoid redundancy. Differences in processing characteristics and bandwidth between sensory systems also severely constrain the information that can be conveyed. Furthermore, perception is a continuous process rather than a series of snapshots of the environment; the design of sensory substitution devices therefore requires assessment of the nature of spatiotemporal continuity for the different senses. Basic psychophysical and neuroscientific research into representations of the environment, and into the most effective ways of conveying information, should lead to better design of sensory substitution systems. Sensory substitution devices should emphasize usability and should not interfere with other inter- or intramodal perceptual function. Devices should be task-focused, since in many cases it may be impractical to convey too many aspects of the environment. Evidence for multisensory integration in the representation of the environment suggests that researchers should not limit themselves to a single modality in their design. Finally, we recommend active training on devices, especially since it allows for externalization, in which proximal sensory stimulation is attributed to a distinct exterior object.
This study investigates how complementary auditory feedback affects short-term gait modifications induced by four training sessions with a robotic exoskeleton. Healthy subjects walked on a treadmill and were instructed to match a modified gait pattern derived from their natural one, while receiving assistance from the robot (kinetic guidance). The main question was whether the most commonly used combination of feedback (i.e., haptic and visual) could be either enhanced by adding auditory feedback or successfully replaced by a combination of kinetic guidance and auditory feedback. Participants were randomly assigned to one of four groups, all of which received kinetic guidance. The control group received additional visual feedback, while the three experimental groups were each provided with a different modality of auditory feedback. The third experimental group also received the same visual feedback as the control group. Differences among the training modalities in gait kinematics, timing, and symmetry were assessed in three post-training sessions.
Background: This paper presents the results of a set of experiments in which we used continuous auditory feedback to augment motor training exercises. This feedback modality is largely underexploited in current robotic rehabilitation systems, which usually implement only very basic auditory interfaces. Our hypothesis is that properly designed continuous auditory feedback can represent temporal and spatial information and, in turn, improve performance and motor learning.
Methods: We implemented three different experiments on healthy subjects, who were asked to track a target on a screen by moving an input device (controller) with their hand. Different visual and auditory feedback modalities were evaluated. The first experiment investigated whether continuous task-related auditory feedback can improve performance to a greater extent than error-related audio feedback or visual feedback alone. In the second experiment we used sensory substitution to compare different types of auditory feedback with equivalent visual feedback, in order to determine whether mapping the same information onto a different sensory channel (the visual channel) yielded effects comparable to those observed in the first experiment. The final experiment applied a continuously changing visuomotor transformation between the controller and the screen and mapped kinematic information, computed in either coordinate system (controller or video), to the audio channel, in order to investigate which information was more relevant to the user.
Results: Task-related audio feedback significantly improved performance with respect to visual feedback alone, whereas error-related feedback did not. Secondly, performance in the audio tasks was significantly better than in the equivalent sensory-substituted visual tasks.
Finally, with respect to visual feedback alone, video-task-related sound feedback decreased the tracking error during the learning of a novel visuomotor perturbation, whereas controller-task-related sound feedback did not. This result was particularly interesting, as the subjects relied more on auditory augmentation of the visualized target motion (which the visuomotor perturbation altered with respect to arm motion) than on sound feedback provided in the controller space, i.e., information directly related to the actual motion of their arm.
Conclusions: Our results indicate that auditory augmentation of visual feedback can be beneficial during the execution of upper-limb movement exercises. In particular, we found that continuous task-related information provided through sound, in addition to visual feedback, can improve not only performance but also the learning of a novel visuomotor perturbation. However, error-related information provided through sound did not improve performance and negatively affected learning in the presence of the visuomotor perturbation.
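As a concrete illustration of the kind of continuous, task-related sonification described in this abstract, the following is a minimal sketch. It is not the study's actual implementation: the pitch range, the error normalization, the logarithmic mapping, and the function names (`error_to_pitch`, `synth_block`) are all illustrative assumptions. The idea is simply that the instantaneous cursor-to-target distance continuously drives the pitch of a feedback tone:

```python
import math

# Hypothetical task-related sonification: map the normalized cursor-to-target
# distance to the pitch of a continuous tone. Smaller error -> higher pitch,
# so the subject can "hear" how close they are to the target.

F_MIN, F_MAX = 220.0, 880.0   # assumed pitch range (Hz)
ERR_MAX = 1.0                  # assumed maximum normalized tracking error

def error_to_pitch(error: float) -> float:
    """Map a tracking error in [0, ERR_MAX] to a frequency in [F_MIN, F_MAX]."""
    e = min(max(error / ERR_MAX, 0.0), 1.0)   # clamp to [0, 1]
    # Logarithmic interpolation: equal error steps give equal pitch intervals.
    return F_MAX * (F_MIN / F_MAX) ** e

def synth_block(error: float, n: int = 64, sr: int = 44100, phase: float = 0.0):
    """Render one audio block of the feedback tone for the current error."""
    f = error_to_pitch(error)
    return [math.sin(phase + 2 * math.pi * f * i / sr) for i in range(n)]

# Zero error sounds at the top of the range; maximum error at the bottom.
print(error_to_pitch(0.0))   # 880.0
print(error_to_pitch(1.0))   # 220.0
```

In a real-time loop, `synth_block` would be called once per audio buffer with the latest error, so the pitch glides continuously as the subject tracks the target.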
This paper studies the relationship between head-related transfer functions (HRTFs) and pinna reflection patterns in the frontal hemispace. A pre-processed database of HRTFs allows extraction of up to three spectral notches from each response taken in the median sagittal plane. A ray-tracing analysis of the obtained notch center frequencies is compared with the set of possible reflection surfaces directly recognizable in the corresponding pinna picture. The results of this analysis are discussed in terms of the sign of the reflection coefficient, which is found to be most likely negative. Based on this finding, a model for real-time HRTF synthesis is proposed, built from distinct filter blocks that allow separate control of the evolution of different acoustic phenomena such as head diffraction, ear resonances, and reflections. The parameters fed to the model are derived either from analysis or from specific anthropometric features of the subject. Finally, objective evaluations of the reconstructed HRTFs in the chosen spatial range are performed through spectral distortion measurements.
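The ray-tracing relation underlying this kind of notch analysis can be sketched as follows. Assuming a single pinna reflection with extra path length 2d (delay t_d = 2d/c relative to the direct sound), a negative reflection coefficient places spectral notches near f_n = n·c/(2d), whereas a positive coefficient would place them at odd multiples of c/(4d). This is a generic two-path interference model, not the paper's exact procedure; the function names and the choice of c = 343 m/s are assumptions for illustration:

```python
C = 343.0  # assumed speed of sound in air (m/s)

def notch_frequencies(d: float, n_notches: int = 3, refl_sign: int = -1):
    """Predicted notch center frequencies (Hz) for a single reflection
    with extra path length 2*d and the given reflection-coefficient sign.

    Negative sign: notches where direct and reflected rays are in
    counter-phase after inversion, i.e. f_n = n * c / (2 d).
    Positive sign: f_n = (2n - 1) * c / (4 d).
    """
    t_d = 2.0 * d / C  # delay of the reflected ray relative to the direct one
    if refl_sign < 0:
        return [n / t_d for n in range(1, n_notches + 1)]
    return [(2 * n - 1) / (2.0 * t_d) for n in range(1, n_notches + 1)]

def reflection_distance(f_n: float, n: int = 1, refl_sign: int = -1) -> float:
    """Invert the relation: ray-tracing distance (m) implied by notch f_n."""
    if refl_sign < 0:
        return n * C / (2.0 * f_n)
    return (2 * n - 1) * C / (4.0 * f_n)

# Example: a 1 cm reflection distance places the first notch near 17.15 kHz
# under the negative-sign hypothesis, and near 8.6 kHz under the positive one.
print(notch_frequencies(0.01, 2))
print(notch_frequencies(0.01, 1, refl_sign=+1))
```

Inverting the relation with `reflection_distance`, as above, is what allows predicted reflection distances to be checked against surfaces identified in the pinna picture.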