Binocular vision is widely recognized as the most reliable source of 3D information within peripersonal space, where grasping takes place. Since grasping is normally successful, it is often assumed that stereovision for action is accurate. This assumption, however, is contradicted by psychophysical studies showing that observers cannot veridically estimate the 3D properties of an object from binocular information. In two experiments, we compared a front-to-back grasp with a perceptual depth-estimation task and found that in both conditions participants consistently relied on the same distorted 3D representation. Subjects experienced (a) compression of egocentric distances: objects looked closer to each other along the z-axis than they were, and (b) underconstancy of relative depth: closer objects looked deeper than farther objects. These biases, which stem from the same mechanism, varied in magnitude across observers, but they affected each subject's perceptual and grasping performance equally. In a third experiment, we found that the visuomotor system compensates for these systematic errors, which are present at planning, through online corrections enabled by visual and haptic feedback of the hand. Furthermore, we hypothesized that the two phenomena would give rise to geometrically inconsistent estimates of the same depth interval. Indeed, in a fourth experiment, we show that the landing positions of the grasping digits differ systematically depending on whether they result from absolute distance estimates or relative depth estimates, even when the targeted spatial locations are identical.
Recent results have shown that the effects of pictorial illusions on grasping may decrease over the course of an experiment. This can be explained as an effect of sensorimotor learning if we treat a pictorial size illusion as simply a perturbation of visually perceived size. However, some studies have reported highly stable illusion effects across trials. In the present paper, we apply an error-correction model of adaptation to experimental data from N = 40 participants grasping the Müller-Lyer illusion. Specifically, participants grasped targets embedded in incremental and decremental Müller-Lyer displays (1) within the same block in pseudo-randomised order, and (2) in separate blocks containing only one type of illusion each. Consistent with our model's predictions, we found an interference effect between the two illusion types when they were presented intermixed, explaining why adaptation rates may vary with experimental design. We also systematically varied the number of object sizes per block, which turned out to have no effect on the rate of adaptation, again in accordance with our model. We discuss implications for the illusion literature and lay out how error-correction models can explain perception-action dissociations in some, but not all, grasping-of-illusion paradigms in a parsimonious and plausible way, without assuming different illusion effects.
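The error-correction model of adaptation referenced above is, in its standard single-state form, a trial-by-trial update of an internal correction driven by the residual error. The sketch below is illustrative only (the retention and learning-rate parameters are hypothetical, and the paper's actual model may differ), but it shows why intermixing incremental and decremental illusion displays produces interference: opposing errors pull a shared state in opposite directions, so neither bias is corrected.

```python
# Minimal single-state error-correction model of grasp adaptation.
# x: internal correction applied to the grip aperture on each trial.
# A (retention) and B (learning rate) are illustrative values only.

def simulate(illusion_seq, A=0.95, B=0.2):
    """Simulate trial-by-trial correction for a sequence of illusion biases."""
    x = 0.0
    states = []
    for bias in illusion_seq:
        e = bias - x          # residual illusion effect on this trial
        x = A * x + B * e     # retained state plus error correction
        states.append(x)
    return states

# Blocked design: 20 incremental trials (bias +1); correction converges.
blocked = simulate([+1.0] * 20)

# Intermixed design: alternating incremental/decremental biases (+1/-1)
# drive the shared state back and forth, so little correction accumulates.
mixed = simulate([+1.0, -1.0] * 10)
```

With these parameters the blocked sequence converges toward its fixed point B/(1 - A + B) = 0.8, while the intermixed sequence oscillates near zero, i.e. the illusion effect appears stable across trials.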
Grasping critically depends on stereo information. We previously found that binocular disparities yield a distorted visual space, in which objects close to the observer are grasped and perceived as if they were more elongated than farther objects. This lack of shape constancy results from an inaccurate estimate of the viewing distance, which affects the estimated depth-to-width ratio of an object. This is because (1) depth from binocular disparities scales with the square of the distance, whereas (2) width from retinal size scales linearly with distance. Conversely, depth from monocular cues (e.g., motion and texture gradients) scales linearly with distance, so the overall shape recovered from these signals should not be affected by errors in egocentric estimates of object location. We therefore reasoned that adding these cues to stereo information should improve shape constancy. Contrary to expectations, in four experiments we found that stereo-texture and stereo-motion stimuli appeared even more distorted than stereo stimuli. More remarkably, results revealed that grasping execution showed identical biases, which were corrected only late in the movement through online control mechanisms, and only if both grasping digits could be visually guided to their respective contact locations. In contrast, when the index finger was occluded by the object, biases in shape estimation continued to affect grasping execution until movement completion. Moreover, while the initial part of the grasp showed evidence of collision avoidance, a control experiment suggested that the above biases could have emerged as early as movement planning, consistent with previous evidence.
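The two scaling laws stated above can be made concrete with the small-angle geometry of stereopsis: relative disparity is roughly δ ≈ I·Δz/d² (I = interocular distance), while angular width is θ ≈ w/d. Inverting these with a misestimated distance therefore inflates recovered depth by the squared distance ratio but recovered width only linearly, distorting the depth-to-width ratio by exactly that ratio. A numerical sketch with illustrative values (the approximation and numbers are ours, not the paper's):

```python
# Approximate small-angle geometry behind the loss of shape constancy
# with stereo. IPD and all distances are illustrative values in meters.

IPD = 0.065  # interocular distance

def retinal_signals(width, depth, d):
    """Angular width and relative disparity produced by an object at distance d."""
    theta = width / d            # angular size (radians, small-angle)
    delta = IPD * depth / d**2   # relative disparity of front vs. back edge
    return theta, delta

def recovered_shape(theta, delta, d_est):
    """Width and depth recovered from retinal signals using an estimated distance."""
    width = theta * d_est              # scales linearly with distance
    depth = delta * d_est**2 / IPD     # scales with distance squared
    return width, depth

# A 5 cm wide, 5 cm deep object at 50 cm, with distance overestimated as 60 cm:
theta, delta = retinal_signals(0.05, 0.05, 0.50)
w_hat, z_hat = recovered_shape(theta, delta, 0.60)
```

Here the recovered depth-to-width ratio is inflated by d_est/d_true = 1.2, so a cube would appear (and be grasped as) elongated in depth; underestimating the distance of far objects produces the opposite, flattened percept.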
Video games present a unique opportunity to study motor skill. First-person shooter (FPS) games are particularly useful because they require visually guided hand movements similar to widely studied planar reaching tasks. However, the tasks must be shown to be equivalent if FPS games are to fulfil their potential as a powerful scientific tool for investigating sensorimotor control. Specifically, research is needed to ensure that differences in visual feedback of a movement do not affect motor learning between the two contexts: in traditional tasks, a movement translates a cursor across a static background, whereas FPS games use movements to pan and tilt the view of the environment. To this end, we designed an online experiment in which participants used their mouse or trackpad to shoot targets in both contexts. Kinematic analysis showed that player movements were nearly identical between conditions, with highly correlated spatial and temporal metrics. This similarity suggests a shared internal model based on comparing predicted and observed displacement vectors, rather than on primary sensory feedback. A second experiment, modelled on FPS-style aim-trainer games, found that movements exhibited the classic invariant features described in the sensorimotor literature. Two measures of mouse control, the mean and the variability of the primary sub-movement's distance, were key predictors of overall task success. More broadly, these results show that FPS games offer a novel, engaging, and compelling environment for studying sensorimotor skill, providing the same precise kinematic metrics as traditional planar reaching tasks.
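The two predictors mentioned above, the mean and the variability of primary sub-movement distance, are computed by parsing each trial's speed profile. The sketch below uses one common convention for that parse (the first local speed minimum after peak speed marks the end of the primary sub-movement); the study's exact criterion may differ, and the data here are invented for illustration.

```python
# Hypothetical parse of a 1-D movement into its primary sub-movement,
# using the first local speed minimum after peak speed as the boundary.

def primary_submovement_end(speeds):
    """Index of the first local speed minimum after the peak speed."""
    peak = max(range(len(speeds)), key=lambda i: speeds[i])
    for i in range(peak + 1, len(speeds) - 1):
        if speeds[i] <= speeds[i - 1] and speeds[i] <= speeds[i + 1]:
            return i
    return len(speeds) - 1

def primary_distance(positions, speeds):
    """Distance covered by the primary sub-movement."""
    end = primary_submovement_end(speeds)
    return abs(positions[end] - positions[0])

def mean_sd(values):
    """Mean and (population) standard deviation across trials."""
    m = sum(values) / len(values)
    var = sum((v - m) ** 2 for v in values) / len(values)
    return m, var ** 0.5
```

Applying `mean_sd` to the per-trial `primary_distance` values yields the two predictors: a mean near the target distance indicates good open-loop calibration, while low variability indicates consistent execution, leaving only small corrective sub-movements.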
Do illusory distortions of perceived object size influence how wide the hand is opened during a grasping movement? Many studies on this question have reported illusion-resistant grasping, but this finding has been contradicted by other studies showing that grasping movements and perceptual judgments are equally susceptible. One largely unexplored explanation for these contradictions is that illusion effects on grasping can be reduced with repeated movements. Using a visuomotor adaptation paradigm, we investigated whether an adaptation model could predict the time course of Ponzo illusion effects on grasping. Participants performed a series of trials in which they viewed a thin wooden target, manually reported an estimate of the target's length, and then reached to grasp the target. Manual size estimates (MSEs) were clearly biased by the illusion, but maximum grip apertures (MGAs) of grasping movements were consistently accurate. Illusion-resistant MGAs were observed immediately upon presentation of the illusion, so there was no decrement in susceptibility for the adaptation model to explain. To determine whether online corrections based on visual feedback could have produced illusion-resistant MGAs, we performed an exploratory post hoc analysis of movement trajectories. Early in the trajectories, grip apertures were biased by the illusion to the same magnitude as the perceptual responses (MSEs), but this bias was attenuated before the MGA. Overall, this preregistered study demonstrated that visuomotor adaptation of grasping is not the primary source of illusion resistance in closed-loop grasping.
Motor learning in visuomotor adaptation tasks results from both explicit and implicit processes, each responding differently to an error signal. While the motor output side of these processes is extensively studied, their visual input side is relatively unknown. We investigated whether and how depth perception affects the computation of error information by explicit and implicit motor learning. Two groups of participants threw virtual darts at a virtual dartboard while receiving perturbed endpoint feedback. The Delayed group was allowed to re-aim and their feedback was delayed to emphasize explicit learning, while the Clamped group received clamped cursor feedback, which they were told to ignore, and continued to aim straight at the target to emphasize implicit adaptation. Both groups played this game in a highly detailed virtual environment (Depth condition) and in an empty environment (No-Depth condition). The Delayed group showed an increase in error sensitivity under the Depth condition relative to the No-Depth condition. In contrast, the Clamped group adapted to the same degree under both conditions. The movement kinematics of the Delayed participants also changed under the Depth condition, consistent with the target appearing more distant; the Clamped group showed no such change. A comparison of the Delayed behavioral data with a perceptual task from the same individuals showed that the effect of the Depth condition on the re-aiming direction was consistent with an increase in the scaling of the error distance and size. These findings suggest that explicit and implicit learning processes may rely on different sources of perceptual information.
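The dissociation above, explicit re-aiming whose error sensitivity scales with perceived distance versus implicit adaptation that does not, can be sketched as two separate update rules. All parameters and the specific equations below are illustrative assumptions, not the study's fitted model; only the structure matters: the explicit term multiplies the residual error by a perceptual depth-scaling factor, while the implicit term responds to the raw clamped error.

```python
# Two-process sketch: explicit learning scales error by perceived depth,
# implicit adaptation does not. All parameter values are illustrative.

def explicit_learning(error, trials, depth_scale, eta=0.2):
    """Re-aiming driven by residual error, scaled by perceived error size."""
    aim = 0.0
    for _ in range(trials):
        residual = error - aim
        aim += eta * depth_scale * residual  # perception-dependent sensitivity
    return aim

def implicit_adaptation(clamp_error, trials, A=0.95, B=0.1):
    """Adaptation to a clamped (constant) error, insensitive to perceived depth."""
    x = 0.0
    for _ in range(trials):
        x = A * x + B * clamp_error          # raw error, no perceptual scaling
    return x

# Same perturbation; a depth-rich scene makes the error look larger (scale 1.2):
aim_rich = explicit_learning(10.0, 3, depth_scale=1.2)
aim_flat = explicit_learning(10.0, 3, depth_scale=1.0)
```

Under this sketch, explicit re-aiming corrects faster when the scene makes the error appear larger, while implicit adaptation converges to the same asymptote B·e/(1 - A) regardless of the depth condition, mirroring the Delayed/Clamped pattern reported above.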