The visual recognition of complex movements and actions is crucial for the survival of many species. It is important not only for communication and recognition at a distance, but also for learning complex motor actions by imitation. Movement recognition has been studied in psychophysical, neurophysiological and imaging experiments, and several cortical areas involved in it have been identified. We use a neurophysiologically plausible and quantitative model as a tool for organizing and making sense of the experimental data, despite their growing size and complexity. We review the main experimental findings, discuss possible neural mechanisms, and show that a learning-based, feedforward model provides a neurophysiologically plausible and consistent summary of many key experimental results.
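The learning-based feedforward architecture summarized above can be illustrated with a minimal sketch: stored "snapshot" templates are matched against incoming frames with Gaussian radial-basis units, and a maximum-pooling stage provides tolerance to temporal misalignment before the evidence is combined. All function names, dimensions and parameter values below are illustrative assumptions, not the published model.

```python
import numpy as np

# Minimal sketch of a feedforward recognition hierarchy (illustrative only):
# incoming frames are matched against stored "snapshot" templates with
# Gaussian radial-basis units, and a max-pooling stage gives tolerance to
# temporal misalignment before the evidence is combined.

rng = np.random.default_rng(0)

def rbf_response(frame, template, sigma=1.0):
    """Gaussian radial-basis match between one frame and one template."""
    return np.exp(-np.sum((frame - template) ** 2) / (2 * sigma ** 2))

def motion_pattern_response(frames, templates, sigma=1.0):
    """Max-pool each template's response over time, then combine."""
    pooled = [max(rbf_response(f, t, sigma) for f in frames) for t in templates]
    return float(np.mean(pooled))

# Toy example: a learned action (a sequence of feature vectors) versus a
# matching and a non-matching test sequence.
learned_action = [rng.standard_normal(10) for _ in range(5)]
matching_test = [f + 0.1 * rng.standard_normal(10) for f in learned_action]
other_action = [rng.standard_normal(10) for _ in range(5)]

print(motion_pattern_response(matching_test, learned_action))  # high
print(motion_pattern_response(other_action, learned_action))   # near zero
```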
The rich and immediate perception of a familiar face, including its identity, expression and even intent, is one of the most impressive shared faculties of human and non-human primate brains. Many visually responsive neurons in the inferotemporal cortex of macaque monkeys respond selectively to faces, sometimes to only one or a few individuals, while showing little sensitivity to scale and other details of the retinal image. Here we show that face-responsive neurons in the macaque monkey anterior inferotemporal cortex are tuned to a fundamental dimension of face perception. Using a norm-based caricaturization framework previously developed for human psychophysics, we varied the identity information present in photo-realistic human faces, and found that neurons of the anterior inferotemporal cortex were most often tuned around the average, identity-ambiguous face. These observations are consistent with face-selective responses in this area being shaped by a figural comparison, reflecting structural differences between an incoming face and an internal reference or norm. As such, these findings link the tuning of neurons in the inferotemporal cortex to psychological models of face identity perception.
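A norm-based code of this kind can be captured by a toy model in which each face is a point in a feature space, identity is the direction from the average (norm) face to that point, and a neuron's firing grows monotonically with caricature level along its preferred identity axis. The feature space, gain and rectified-ramp tuning below are illustrative assumptions, not a fit to the recordings.

```python
import numpy as np

# Toy norm-based face code: the response grows with distance from the
# average ("norm") face along the neuron's preferred identity direction.

rng = np.random.default_rng(1)
dim = 20

norm_face = np.zeros(dim)                 # the identity-ambiguous average face
identity_axis = rng.standard_normal(dim)
identity_axis /= np.linalg.norm(identity_axis)

def response(face, preferred_axis, gain=1.0, baseline=0.2):
    """Monotone rectified-ramp tuning around the norm along one axis."""
    projection = np.dot(face - norm_face, preferred_axis)
    return baseline + gain * max(projection, 0.0)

# Responses increase with caricature level (0 = norm, 1 = veridical face,
# >1 = caricature), consistent with tuning around the average face.
for level in [0.0, 0.5, 1.0, 1.5]:
    face = norm_face + level * identity_axis
    print(level, round(response(face, identity_axis), 3))
```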
Human observers readily recognize emotions expressed in body movement. Their perceptual judgments are based on simple movement features, such as overall speed, but also on more intricate posture and dynamic cues. The systematic analysis of such features is complicated by the large number of potentially relevant kinematic and dynamic parameters. To identify emotion-specific features, we motion-captured the neutral and emotionally expressive (anger, happiness, sadness, fear) gaits of 25 individuals. Body posture was characterized by average flexion angles, and a low-dimensional parameterization of the spatio-temporal structure of the joint trajectories was obtained by approximation with a nonlinear mixture model. Applying sparse regression, we extracted critical emotion-specific posture and movement features, which typically depended on only a small number of joints. The features extracted from the motor behavior closely resembled the features critical for the perception of emotion from gait, as determined by a statistical analysis of the classification and rating judgments of 21 observers viewing avatars animated with the recorded movements. The perceptual relevance of these features was further supported by a second experiment showing that artificial walkers containing only the critical features induced high-level after-effects matching those induced by adaptation with natural emotional walkers.
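The sparse-regression step of this pipeline can be illustrated with an L1-penalized (lasso) regression that maps joint-based kinematic features to an emotion rating and drives most coefficients to exactly zero, so the surviving features identify the few relevant joints. The data below are synthetic placeholders, not the study's motion-capture recordings.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Illustrative sparse regression: predict an emotion rating from joint-angle
# features; the L1 penalty zeroes out uninformative coefficients, so the
# surviving features identify the few joints that matter.

rng = np.random.default_rng(2)
n_walkers, n_features = 100, 30          # e.g., flexion angles of 30 joints
X = rng.standard_normal((n_walkers, n_features))

# Synthetic ground truth: only 3 joints carry the emotion-specific signal.
true_weights = np.zeros(n_features)
true_weights[[2, 7, 19]] = [1.5, -2.0, 1.0]
ratings = X @ true_weights + 0.1 * rng.standard_normal(n_walkers)

model = Lasso(alpha=0.1).fit(X, ratings)
selected = np.flatnonzero(model.coef_)
print("selected joints:", selected)      # recovers the sparse support
```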
Experimental evidence suggests a link between perception and the execution of actions. In particular, it has been proposed that motor programs might directly influence visual action perception. According to this hypothesis, the acquisition of novel motor behaviors should improve their visual recognition, even in the absence of visual learning. We tested this prediction using a new experimental paradigm that dissociates visual and motor learning during the acquisition of novel motor patterns. The visual recognition of gait patterns from point-light stimuli was assessed before and after nonvisual motor training. During this training, subjects were blindfolded and learned a novel coordinated upper-body movement based only on verbal and haptic feedback. The learned movement matched one of the visual test patterns. Despite the absence of visual stimulation during training, we observed a selective improvement of visual recognition performance for the learned movement. Furthermore, visual recognition performance after training correlated strongly with the accuracy with which the learned motor pattern was executed. These results demonstrate, for the first time, that motor learning has a direct and highly selective influence on visual action recognition that is not mediated by visual learning.
This study provides Class III evidence that coordinative training improves motor performance and reduces ataxia symptoms in patients with progressive cerebellar ataxia.
Converging experimental evidence indicates that mirror neurons in the monkey premotor area F5 encode the goals of observed motor acts [1-3]. However, it is unknown whether they also contribute to encoding the perspective from which the motor acts of others are seen. In order to address this issue, we recorded the visual responses of mirror neurons of monkey area F5 by using a novel experimental paradigm based on the presentation of movies showing grasping motor acts from different visual perspectives. We found that the majority of the tested mirror neurons (74%) exhibited view-dependent activity with responses tuned to specific points of view. A minority of the tested mirror neurons (26%) exhibited view-independent responses. We conclude that view-independent mirror neurons encode action goals irrespective of the details of the observed motor acts, whereas the view-dependent ones might either form an intermediate step in the formation of view independence or contribute to a modulation of view-dependent representations in higher-level visual areas, potentially linking the goals of observed motor acts with their pictorial aspects.
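The proposed relationship between the two neuron classes can be sketched as a two-stage pooling architecture: view-tuned units respond to a grasping act seen from a preferred viewpoint, and a downstream unit pools over them with a maximum operation, responding to the act regardless of view. This is an illustrative toy with assumed tuning widths and preferred views, not a model fitted to the F5 recordings.

```python
import numpy as np

# Toy two-stage architecture: view-tuned units feed a pooling unit that
# responds to the same motor act from any viewpoint (illustrative only).

def view_tuned_response(view_deg, preferred_deg, bandwidth_deg=45.0):
    """Gaussian tuning to the viewpoint of an observed grasping act."""
    delta = (view_deg - preferred_deg + 180) % 360 - 180  # wrap to [-180, 180)
    return np.exp(-delta**2 / (2 * bandwidth_deg**2))

preferred_views = [0, 90, 180, 270]  # assumed preferred viewpoints (degrees)

def view_independent_response(view_deg):
    """Max pooling over view-tuned units yields view invariance."""
    return max(view_tuned_response(view_deg, p) for p in preferred_views)

for view in [0, 45, 90, 135, 180]:
    tuned = view_tuned_response(view, preferred_deg=0)
    pooled = view_independent_response(view)
    print(f"view={view:3d} deg: view-tuned={tuned:.2f}, pooled={pooled:.2f}")
```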
This study provides Class III evidence that directed training with Xbox Kinect video games can improve several signs of ataxia in adolescents with progressive ataxia as measured by SARA score, Dynamic Gait Index, and Activity-specific Balance Confidence Scale at 8 weeks of training.
Evidence has accumulated for a mirror system in humans that simulates the actions of conspecifics (Wilson & Knoblich, 2005). One likely purpose of such a simulation system is to support action prediction. We focused on the time course of action prediction, investigating whether the prediction of actions involves a real-time simulation process. We motion-captured a number of human actions and rendered them as point-light action sequences. In the experiments, we presented brief videos of human actions, followed by an occluder and a static test stimulus. Both the occluder duration (SOA of 100, 400, or 700 ms) and the distance of the test stimulus from the endpoint of the action sequence (corresponding to 100, 400, or 700 ms) were varied independently. Subjects had to judge whether the test stimulus depicted a continuation of the action in the same orientation, or whether it was presented in a different orientation in depth than the preceding action sequence. Prediction accuracy was best when SOA and distance to the endpoint corresponded, i.e., when the test image was the continuation of the sequence that matched the occluder duration. This pattern of results was abolished when the sequences and test images were inverted (flipped around the horizontal axis); in that case, performance simply deteriorated with increasing distance to the end of the sequence. Overall, our findings suggest that action prediction involves a real-time simulation process, which can break down when actions are presented under viewing conditions for which we have little experience.
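The real-time simulation account suggested by these results can be expressed as a toy model in which the internal action representation advances at veridical speed during occlusion, so that the expected test posture corresponds to the occluder duration; accuracy then falls off with the mismatch between SOA and the test stimulus' advance along the action. The Gaussian falloff and its width are illustrative assumptions.

```python
import itertools
import numpy as np

# Toy real-time simulation model: during occlusion the internal action state
# advances at real-time speed, so the expected test posture corresponds to
# the occluder duration (SOA). Accuracy is modeled as falling off with the
# mismatch between SOA and the test stimulus' advance along the action.

def predicted_accuracy(soa_ms, test_advance_ms, sigma_ms=300.0,
                       floor=0.5, ceiling=0.95):
    """Gaussian falloff of accuracy with simulation/test mismatch."""
    mismatch = abs(soa_ms - test_advance_ms)
    return floor + (ceiling - floor) * np.exp(-mismatch**2 / (2 * sigma_ms**2))

# Accuracy peaks on the diagonal, where SOA and test advance correspond.
for soa, advance in itertools.product([100, 400, 700], repeat=2):
    print(f"SOA={soa:3d} ms, test advance={advance:3d} ms -> "
          f"accuracy ~ {predicted_accuracy(soa, advance):.2f}")
```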