As we age, there is a wide range of changes in motor, sensory, cognitive, and temporal processing due to alterations in the functioning of the central nervous and musculoskeletal systems. Specifically, aging is associated with degradations in gait; altered processing in the individual sensory systems; modifications in executive control, memory, and attention; and changes in temporal processing. These age-related alterations are often inter-related and have been suggested to result from shared neural substrates. Additionally, the overlap between these brain areas and those controlling walking raises the possibility of facilitating performance in several tasks by introducing protocols that can efficiently target all four domains. Attempts to counteract these negative effects of normal aging have focused on preventing falls and/or enhancing cognitive processes, while largely overlooking the potential benefits of multisensory integration in old age. Research shows that the aging brain tends to rely increasingly on multisensory integration to compensate for degradations in the individual sensory systems and for altered neural functioning. This review covers the age-related changes in the above-mentioned domains and the potential to exploit the benefits associated with multisensory integration in aging so as to improve mobility and enhance sensory, cognitive, and temporal processing.
Despite appearing automatic and effortless, perceiving the visual world is a highly complex process that depends on intact visual and oculomotor function. Understanding the mechanisms underlying spatial updating (i.e., gaze contingency) represents an important yet unresolved issue in the fields of visual perception and cognitive neuroscience. Many questions regarding the processes involved in updating visual information as a function of eye movements remain open for research. Beyond its importance for basic research, gaze contingency represents a challenge for visual prosthetics as well. While most artificial-vision studies acknowledge its importance for providing accurate visual percepts to blind implanted patients, the majority of current devices do not compensate for gaze position. To date, artificial percepts have been provided to the blind population either by intraocular light-sensing circuitry or by external cameras. While the former commonly accounts for gaze shifts, the latter requires eye-tracking or similar technology in order to deliver percepts based on gaze position. Motivated by the need to overcome the hurdle of gaze contingency in artificial vision, we aim to provide a thorough overview of the research addressing the neural underpinnings of compensation for eye movements, as well as its relevance to visual prosthetics. The present review outlines what is currently known about the mechanisms underlying spatial updating and reviews the attempts of current visual prosthetic devices to overcome the hurdle of gaze contingency. We discuss the limitations of current devices and highlight the need for eye-tracking methodology to introduce gaze-contingent information into visual prosthetics.
The ability to distinguish self-generated stimuli from those caused by external sources is critical for all behaving organisms. Although many studies point to a sensory attenuation of self-generated stimuli, recent evidence suggests that motor actions can result in either attenuated or enhanced perceptual processing depending on the environmental context (i.e., stimulus intensity). The present study employed 2-AFC sound detection and loudness discrimination tasks to test whether sound source (self- or externally-generated) and stimulus intensity (supra- or near-threshold) interactively modulate detection ability and loudness perception. Self-generation did not affect detection and discrimination sensitivity (i.e., detection thresholds and Just Noticeable Difference, respectively). However, in the discrimination task, we observed a significant interaction between self-generation and intensity on perceptual bias (i.e., Point of Subjective Equality). Supra-threshold self-generated sounds were perceived as softer than externally-generated ones, while at near-threshold intensities self-generated sounds were perceived as louder than externally-generated ones. Our findings provide empirical support for recent theories on how predictions and signal intensity modulate perceptual processing, pointing to interactive effects of intensity and self-generation that appear to be driven by a biased estimate of perceived loudness rather than by changes in detection and discrimination sensitivity.
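The perceptual-bias and sensitivity measures named above (PSE and JND) are standardly read off a fitted psychometric function. The following is a minimal illustrative sketch, not the study's actual analysis pipeline: it simulates 2-AFC loudness-discrimination responses and fits a cumulative Gaussian, taking the 50% point as the PSE and the 50%-to-75% distance as the JND. All numbers and variable names are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(x, pse, sigma):
    """P('comparison louder') as a cumulative Gaussian of level x."""
    return norm.cdf(x, loc=pse, scale=sigma)

# Hypothetical data: comparison levels (dB re: standard) and simulated
# proportions of "comparison louder" responses, 200 trials per level.
rng = np.random.default_rng(0)
levels = np.linspace(-6, 6, 9)
true_pse, true_sigma = -1.0, 2.0   # e.g., a sound heard slightly softer
n_trials = 200
p_obs = rng.binomial(n_trials, psychometric(levels, true_pse, true_sigma)) / n_trials

# Fit the curve; PSE is the 50% point, JND the 50%-to-75% distance.
(pse, sigma), _ = curve_fit(psychometric, levels, p_obs, p0=[0.0, 1.0])
jnd = sigma * norm.ppf(0.75)
print(f"PSE = {pse:.2f} dB, JND = {jnd:.2f} dB")
```

A negative fitted PSE here would correspond to the comparison sound being judged softer than it physically is, the kind of bias shift the abstract reports for supra-threshold self-generated sounds.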
The visual pathway is retinotopically organized and sensitive to gaze position, leading us to hypothesize that subjects using visual prostheses that incorporate eye position would perform better on perceptual tasks than with devices that are merely head-steered. We had sighted subjects read sentences from the MNREAD corpus through a simulation of artificial vision under conditions of full gaze compensation and of head-steered viewing. With 2000 simulated phosphenes, subjects (n = 23) were immediately able to read under full gaze compensation and were assessed at an equivalent visual acuity of 1.0 logMAR, but were nearly unable to perform the task under head-steered viewing. At the largest font size tested, 1.4 logMAR, subjects read at 59 WPM (50% of normal speed) with 100% accuracy under the full-gaze condition, but at 0.7 WPM (under 1% of normal) with less than 15% accuracy under head-steering. We conclude that gaze-compensated prostheses are likely to produce considerably better patient outcomes than those that do not incorporate eye movements.
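The reading-speed metric used above (words per minute) is simple arithmetic: words read correctly per trial, scaled by trial duration. A small sketch, with purely illustrative durations chosen only to land near the reported 59 WPM and 0.7 WPM figures, not the study's actual trial data:

```python
def reading_speed_wpm(words_correct: int, duration_s: float) -> float:
    """MNREAD-style reading speed: correctly read words per minute."""
    return 60.0 * words_correct / duration_s

# Hypothetical trials: a 10-word sentence under each condition.
wpm_gaze = reading_speed_wpm(10, 10.2)   # gaze-compensated: ~59 WPM
wpm_head = reading_speed_wpm(1, 85.0)    # head-steered: ~0.7 WPM
print(f"{wpm_gaze:.1f} vs {wpm_head:.1f} WPM")
```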
Actions modulate sensory processing by attenuating responses to self- compared to externally-generated inputs, an effect traditionally attributed to stimulus-specific motor predictions. Yet suppression has also been found for stimuli merely coinciding with actions, pointing to unspecific processes that may be driven by neuromodulatory systems. Meanwhile, the differential processing of self-generated stimuli raises the possibility that actions also affect memory for these stimuli; however, evidence remains mixed as to the direction of the effects. Here, we assessed the effects of actions on sensory processing and memory encoding of concomitant, but unpredictable, sounds, using a combination of self-generation and memory recognition tasks concurrently with EEG and pupil recordings. At encoding, subjects performed button presses that half of the time generated a sound (motor-auditory; MA) and listened to passively presented sounds (auditory-only; A). At retrieval, two sounds were presented and participants had to respond which one had been presented before. We measured memory bias and memory performance by using sequences in which either both or only one of the test sounds had been presented at encoding, respectively. Results showed worse memory performance (but no differences in memory bias), attenuated responses, and larger pupil diameter for MA compared to A sounds. Critically, the larger the sensory attenuation and pupil diameter, the worse the memory performance for MA sounds. Nevertheless, sensory attenuation did not correlate with pupil dilation. Collectively, our findings suggest that sensory attenuation and neuromodulatory processes coexist during actions, and both relate to disrupted memory for concurrent, albeit unpredictable, sounds.