It is known that people can learn to deal with delays between their actions and the consequences of such actions. We wondered whether they do so by adjusting their anticipations about the sensory consequences of their actions or whether they simply learn to move in certain ways when performing specific tasks. To find out, we examined details of how people learn to intercept a moving target with a cursor that follows the hand with a delay and examined the transfer of learning between this task and various other tasks that require temporal precision. Subjects readily learned to intercept the moving target with the delayed cursor. The compensation for the delay generalized across modifications of the task, so subjects did not simply learn to move in a certain way in specific circumstances. The compensation did not generalize to completely different timing tasks, so subjects did not generally expect the consequences of their motor commands to be delayed. We conclude that people specifically learn to control the delayed visual consequences of their actions to perform certain tasks.
Many actions involve limb movements toward a target. Visual and proprioceptive estimates of the hand are available online, and by optimally combining both modalities during the movement (Ernst and Banks, 2002), the system can increase the precision of the hand estimate. The notion that both sensory modalities are integrated is also motivated by the intuition that we do not consciously perceive any discrepancy between the felt and seen positions of the hand. This coherence resulting from integration does not necessarily imply realignment between the two modalities (Smeets et al., 2006). For example, the visual and proprioceptive estimates might differ without either of them (e.g., proprioception) ever being adjusted once the other (e.g., vision) is recovered. The implication that the felt and seen positions might differ has a temporal analog: because feedback about a given instantaneous hand position reaches the relevant brain areas sooner for proprioception than for vision, the corresponding unisensory position estimates will differ, with the proprioceptive estimate running ahead of the visual one. Assuming that the system optimally integrates the available evidence from both senses online, we introduce a temporal mechanism that explains the reported overestimation of hand position when vision is occluded, for both active and passive movements (Gritsenko et al., 2007), without needing to resort to initial feedforward estimates (Wolpert et al., 1995). We set up hypotheses to test the validity of the model and contrast simulation-based predictions with empirical data.
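The temporal mechanism described above can be illustrated with a minimal numerical sketch. All values (latencies, hand speed, unisensory variances) are illustrative assumptions, not the paper's parameters; the point is only that reliability-weighted averaging of two estimates that refer to different past instants places the combined estimate ahead of the visual one for a moving hand.

```python
# Illustrative sketch of the temporal mechanism (assumed values throughout).
# The hand moves at constant speed; proprioception and vision each report the
# position the hand occupied one latency ago, and the two estimates are
# combined by reliability-weighted (optimal) averaging.

speed = 0.5          # hand speed in m/s (assumed)
t = 1.0              # current time in s
lat_prop = 0.05      # proprioceptive latency in s (assumed, shorter)
lat_vis = 0.10       # visual latency in s (assumed, longer)
var_prop = 1.0       # unisensory variances (assumed equal for simplicity)
var_vis = 1.0

def hand_pos(time):
    """True hand position under constant-velocity motion."""
    return speed * time

# Each sense reports an older snapshot of the hand's position.
x_prop = hand_pos(t - lat_prop)   # proprioceptive estimate (more recent)
x_vis = hand_pos(t - lat_vis)     # visual estimate (older)

# Reliability-weighted average (Ernst and Banks, 2002 style weights).
w_prop = (1 / var_prop) / (1 / var_prop + 1 / var_vis)
w_vis = 1 - w_prop
x_combined = w_prop * x_prop + w_vis * x_vis

# For a moving hand the combined estimate lies ahead of the visual estimate,
# i.e., the hand's position is overestimated relative to where vision alone
# would place it.
print(x_combined > x_vis)
print(x_combined - x_vis)
```

With these assumed numbers the combined estimate sits 12.5 mm ahead of the visual one; the size of the bias scales with hand speed and with the latency difference between the two senses.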
It has been hypothesised that our actions are less susceptible to visual illusions than our perceptual judgements because similar information is processed for perception and action in separate pathways. We test this hypothesis for subjects intercepting a moving object that appears to move at a different speed than its true speed due to an illusion. The object was a moving Gabor patch: a sinusoidal grating of which the luminance contrast is modulated by a two-dimensional Gaussian. We manipulated the patch's apparent speed by moving the grating relative to the Gaussian. We used separate two-interval forced choice discrimination tasks to determine how moving the grating influenced ten people's judgements of the object's position and velocity while they were fixating. Based on their perceptual judgements, and knowing that our ability to correct for errors that arise from relying on incorrect judgements is limited by a sensorimotor delay of about 100 msec, we predicted the extent to which subjects would tap ahead of or behind similar targets when trying to intercept them at the fixation location. The predicted errors closely matched the actual errors that subjects made when trying to intercept the targets. This finding does not support the two visual streams hypothesis. The results are consistent with the idea that the extent to which an illusion influences an action tells us something about the extent to which the action relies on the percept in question.
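The prediction logic can be sketched as follows. This is a hypothetical reconstruction, not the authors' code: if aiming is based on the perceived (illusory) target speed, and corrections during the final ~100 msec of the movement are no longer possible, the tapping error should be roughly the speed bias accumulated over that uncorrectable interval. The function name and example speeds are assumptions for illustration.

```python
# Hypothetical sketch of the delay-based prediction (not the authors' code).
# A perceptual speed bias that cannot be corrected during the last `delay`
# seconds of the movement translates into a spatial tapping error.

DELAY = 0.1  # sensorimotor delay in seconds (from the abstract)

def predicted_tap_error(perceived_speed, true_speed, delay=DELAY):
    """Predicted spatial error in metres; positive means tapping ahead
    of the target (in its direction of motion)."""
    # The subject plans for the target to cover perceived_speed * delay
    # during the final interval, but it actually covers true_speed * delay.
    return (perceived_speed - true_speed) * delay

# Example (illustrative values): a target moving at 0.30 m/s that appears
# to move at 0.35 m/s should be tapped about 5 mm ahead of its position.
print(predicted_tap_error(0.35, 0.30))
```

A target that appears slower than it is would give a negative error, i.e., tapping behind the target, matching the sign logic described in the abstract.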
In daily life we often interact with moving objects in tasks that involve analyzing visual motion, like catching a ball. To do so successfully we track objects with our gaze, using a combination of smooth pursuit and saccades. Previous work has shown that the occurrence and direction of corrective saccades lead to changes in the perceived velocity of moving objects. Here we investigate whether such changes lead to equivalent biases in interception. Participants had to track moving targets with their gaze, and in separate sessions either judge the targets’ velocities or intercept them by tapping on them. We separated trials in which target movements were tracked with pure pursuit from trials in which identical target movements were tracked with a combination of pursuit and corrective saccades. Our results show that interception errors are shifted in accordance with the observed influence of corrective saccades on velocity judgments. Furthermore, while the time at which corrective saccades occurred did not affect velocity judgments, it did influence their effect in the interception task: corrective saccades around 100 ms before the tap had a stronger effect on the endpoint error than earlier saccades. This might explain why participants made earlier corrective saccades in the interception task.