We present a set of stimuli representing human actions under point-light conditions, as seen from different viewpoints. The set contains 22 fairly short, well-delineated, and visually "loopable" actions. For each action, we provide movie files from five different viewpoints as well as a text file with the three spatial coordinates of the point lights, allowing researchers to construct customized versions. The full set of stimuli may be downloaded from www.psychonomic.org/archive/.
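Since the coordinate files are plain text, researchers will typically parse them into per-frame marker positions before rendering customized displays. The sketch below is a minimal, hypothetical parser; the actual file layout in the stimulus set is an assumption here (one line per frame, whitespace-separated x y z triples, one triple per point light), so the function and the sample data are illustrative only.

```python
# Hypothetical sketch: parsing a point-light coordinate text file.
# Assumed layout (NOT confirmed by the stimulus set's documentation):
# one line per frame, whitespace-separated x y z triples per marker.

def parse_pointlight_text(text):
    """Parse frames of (x, y, z) marker coordinates from plain text."""
    frames = []
    for line in text.strip().splitlines():
        values = [float(v) for v in line.split()]
        if len(values) % 3 != 0:
            raise ValueError("each line must hold complete x y z triples")
        # Group the flat value list into (x, y, z) tuples, one per marker.
        frames.append([tuple(values[i:i + 3])
                       for i in range(0, len(values), 3)])
    return frames

# Synthetic example: two frames, two point lights each.
sample = """\
0.0 1.0 0.5   0.2 1.1 0.4
0.1 1.0 0.5   0.3 1.2 0.4
"""
frames = parse_pointlight_text(sample)
print(len(frames), len(frames[0]))  # → 2 2
```

Each frame can then be projected to any desired viewpoint, which is what makes the raw coordinates more flexible than the five pre-rendered movie files.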
Using functional magnetic resonance imaging and point-light displays portraying six different human actions, we show that several visual cortical regions, including the human MT/V5 complex (hMT/V5+), the posterior inferior temporal gyrus, and the superior temporal sulcus, are differentially active in the subtraction comparing biological motion with scrambled motion. Comparison of biological motion with three-dimensional rotation (of a human figure), articulated motion, and translation suggests that superior temporal sulcus activity reflects the action portrayed in the biological motion stimuli, whereas the posterior inferior temporal gyrus responds to the figure and hMT/V5+ to the complex motion pattern present in biological motion stimuli. These results were confirmed with implied-action stimuli.
Background: In the context of interacting activities requiring close body contact, such as fighting or dancing, the actions of one agent can be used to predict the actions of the second agent [1]. In the present study, we investigated whether interpersonal predictive coding extends to interactive activities, such as communicative interactions, in which no physical contingency is implied between the movements of the interacting individuals.

Methodology/Principal Findings: Participants observed point-light displays of two agents (A and B) performing separate actions. In the communicative condition, the action performed by agent B responded to a communicative gesture performed by agent A. In the individual condition, agent A's communicative action was substituted with a non-communicative action. Using a simultaneous masking detection task, we demonstrate that observing the communicative gesture performed by agent A enhanced visual discrimination of agent B.

Conclusions/Significance: Our finding complements and extends previous evidence for interpersonal predictive coding, suggesting that the communicative gestures of one agent can serve as a predictor of the expected actions of the respondent, even when no physical contact between agents is implied.
After viewing an object in an implied rotation, subjects' short-term visual memory for the object's position is distorted in the direction of rotation. Previous accounts of this representational momentum effect have emphasized the analogy to physical momentum. This study provides a more general perspective: Position memory is influenced by anticipatory processes related to the future event course. In Experiment 1, subjects are presented with an implied periodical event in which a rectangle rotates back and forth. When a direction change in the implied rotation can be anticipated, memory distortion size drops back to zero. Experiment 2 rejects an alternative explanation for the findings of Experiment 1 in terms of enhanced position memory caused by repeated presentations of the memory pattern orientation within the same trial. In Experiment 3, the periods of the implied event are marked by changes in velocity rather than direction. The anticipation of a sudden velocity increase leads to a larger memory shift. We conclude that the perceptual system anticipates the event course on the basis of a representation of the higher order event structure rather than the local motion characteristics.
The perceptually bistable character of point-light walkers was examined in three experiments. A point-light figure without explicit depth cues constitutes a perfectly ambiguous stimulus: from all viewpoints, multiple interpretations are possible concerning the depth orientation of the figure. The first experiment shows that non-lateral views of the walker are indeed interpreted in two orientations, either as facing towards the viewer or as facing away from the viewer, but that the facing-the-viewer interpretation is reported more frequently. In the second experiment, the point-light figure walked backwards, making its global orientation opposite to the direction of global motion; the interpretation in which the walker faced the viewer was again reported more frequently. The robustness of these findings was examined in the final experiment, which explored the effects of disambiguating the stimulus by introducing a local depth cue (occlusion) or a more global depth cue (perspective projection).