Many studies have demonstrated higher accuracy in perception and action when more than one sense is used. The maximum-likelihood estimation (MLE) model offers a recent account of how perceptual information is integrated across different sensory modalities, suggesting statistically optimal integration. The purpose of the present study was to investigate how visual and proprioceptive movement information is integrated for the perception of trajectory geometry. To test this, participants sat in front of an apparatus that moved a handle along a horizontal plane. Participants had to decide whether two consecutive trajectories formed an acute or an obtuse movement path. Judgments were based either on information from a single modality alone, i.e., vision or proprioception, or on the combined information of both modalities. We estimated the bias and variance for each single-modality condition and predicted both parameters for the bimodal condition using the MLE model. Consistent with previous findings, the variability of perceptual judgments about trajectory geometry decreased when judgments were based on combined visual-proprioceptive information. Furthermore, the observed bimodal data corresponded well to the predicted parameters. Our results suggest that visual and proprioceptive movement information is integrated in a statistically optimal manner for the perception of trajectory geometry.
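To make the MLE prediction concrete, the sketch below shows how the bimodal bias and variance follow from the unimodal estimates: each cue is weighted by its relative reliability (inverse variance), and the predicted bimodal variance is never larger than that of the more reliable cue. The function name and all numbers are illustrative placeholders, not values from the study.

```python
# Minimal sketch of the MLE prediction for bimodal integration, assuming
# unimodal biases and variances have already been estimated. All numbers
# below are hypothetical, not data from the study.

def mle_prediction(bias_v, var_v, bias_p, var_p):
    """Predict bimodal bias and variance from unimodal estimates.

    Each cue is weighted by its relative reliability (inverse variance);
    the predicted bimodal variance is smaller than either unimodal variance.
    """
    w_v = (1.0 / var_v) / (1.0 / var_v + 1.0 / var_p)  # visual weight
    w_p = 1.0 - w_v                                    # proprioceptive weight
    bias_vp = w_v * bias_v + w_p * bias_p              # weighted-average bias
    var_vp = (var_v * var_p) / (var_v + var_p)         # reduced variance
    return bias_vp, var_vp

# Example with hypothetical unimodal estimates (degrees of path angle):
bias_vp, var_vp = mle_prediction(bias_v=2.0, var_v=9.0, bias_p=-1.0, var_p=16.0)
print(bias_vp, var_vp)  # ~0.92 deg bias, 5.76 deg^2 variance
```

Note that the predicted variance (5.76) is below both unimodal variances (9.0 and 16.0); this variance reduction is the signature of statistically optimal integration that the study tests.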
Direction of gaze (eye angle + head angle) has been shown to be important for representing space for action, implying a crucial role of vision in spatial updating. However, blind people have no access to vision yet are able to perform goal-directed actions successfully. Here, we investigated the role of visual experience in localizing and updating targets as a function of intervening gaze shifts in humans. People who differed in visual experience (late blind, congenitally blind, or sighted) were briefly presented with a proprioceptive reach target while facing it. Before they reached to the target's remembered location, they turned their head toward an eccentric direction, which also induced corresponding eye movements in sighted and late blind individuals. We found that reaching errors varied systematically as a function of the shift in gaze direction only in participants with early visual experience (sighted and late blind). Among the late blind, this effect was present only in people with movable eyes, not in people with at least one glass eye. Our results suggest that the effect of gaze shifts on spatial updating develops on the basis of visual experience early in life and persists even after loss of vision, as long as feedback from the eyes and head is available.
Many studies provide evidence that information from different modalities is integrated following the maximum-likelihood estimation (MLE) model. For instance, we recently found that visual and proprioceptive path trajectories are combined optimally (Reuschel et al. in Exp Brain Res 201:853-862, 2010). However, other studies have failed to reveal optimal integration of such dynamic information. In the present study, we aimed to generalize our previous findings to different parts of the workspace (central, ipsilateral, or contralateral) and to different types of judgments (relative vs. absolute). Participants made relative judgments by deciding whether an angular path was acute or obtuse, or absolute judgments by deciding whether a one-segment straight path was directed to the left or right. Trajectories were presented in the visual, proprioceptive, or combined visual-proprioceptive modality. We measured the bias and variance of these estimates and predicted both parameters using the MLE model. In accordance with the model, participants linearly combined the unimodal angular-path information, weighting each cue by its reliability, irrespective of the side of workspace. However, bimodal estimates were no more precise than unimodal estimates, which is inconsistent with the MLE model. For the absolute judgment task, participants' estimates were highly accurate and did not differ across modalities; thus, we were unable to test whether the bimodal percept resulted from a weighted average of the visual and proprioceptive input. In this task, too, participants were no more precise in the bimodal than in the unimodal conditions, again inconsistent with the MLE model. The current findings suggest that optimal integration of visual and proprioceptive path-trajectory information applies only under some conditions.
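The bias and variance entering such MLE predictions are typically obtained by fitting a cumulative-Gaussian psychometric function to the binary responses, e.g., the proportion of "obtuse" judgments at each tested angle. The sketch below illustrates that step with hypothetical data; the study's exact fitting procedure is not reproduced here, so treat the data and starting values as assumptions.

```python
# Sketch of estimating bias and variance from binary "acute vs. obtuse"
# judgments by fitting a cumulative Gaussian. The fitted mean gives the
# point of subjective equality (bias relative to 90 deg) and the fitted
# sigma squared gives the variance. Data below are hypothetical.

import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(angle, mu, sigma):
    """Probability of responding 'obtuse' as a function of path angle."""
    return norm.cdf(angle, loc=mu, scale=sigma)

# Hypothetical tested angles (deg) and proportions of 'obtuse' responses.
angles = np.array([75, 80, 85, 90, 95, 100, 105], dtype=float)
p_obtuse = np.array([0.05, 0.12, 0.30, 0.55, 0.78, 0.93, 0.98])

(mu, sigma), _ = curve_fit(psychometric, angles, p_obtuse, p0=[90.0, 5.0])
bias = mu - 90.0       # deviation of the PSE from a right angle
variance = sigma ** 2  # precision measure entering the MLE prediction
print(f"bias = {bias:.2f} deg, variance = {variance:.2f} deg^2")
```

Unimodal parameters obtained this way can be fed directly into the MLE prediction sketched earlier to derive the predicted bimodal bias and variance against which the observed bimodal data are compared.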