A vivid perception of the moving form of a human figure can be obtained from a few moving light points on the joints of the body. This is known as biological motion perception. It is commonly believed that the perception of biological motion rests on image motion signals. Curiously, however, some patients with lesions to motion processing areas of the dorsal stream are severely impaired in image motion perception but can easily perceive biological motion. Here we describe a biological motion stimulus, based on a limited-lifetime technique, that tests the perception of a moving human figure in the absence of local image motion. We find that subjects can spontaneously recognize a moving human figure in displays without local image motion. Their performance is very similar to that for classic point-light displays. We also find that tasks involving the discrimination of walking direction or the coherence of a walking figure can be performed in the absence of image motion. Thus, although image motion may generally aid processes such as segmenting figure from background, we propose that it is not the basis for the percept of biological motion. Rather, we suggest that biological motion is derived from dynamic form information on body posture evolving over time.

The phenomenon of biological motion perception was demonstrated by Johansson (1). He showed that a dozen moving light points, attached to the joints of the body, suffice to create a rich perception of a moving human figure. Biological motion is a highly complex motion pattern and an extreme example of the sophistication of pattern analysis in the brain. It is also interesting because it links the perception of motion with the perception of form, two qualities that involve largely different cortical processing streams (2). Form analysis is carried out in the ventral stream, whereas image motion is processed in areas of the dorsal stream.
Selectivity for biological motion has been found in the superior temporal polysensory area (3, 4), which receives input from both processing streams. Patients with lesions to motion processing areas of the dorsal stream are severely impaired in image motion perception but can easily perceive biological motion (5, 6). This finding could suggest that biological motion perception does not rely on the analysis of image motion signals. Here, we test this hypothesis by using a stimulus in which we manipulate the amount of local image motion that is consistent with movement of the limbs.

Standard biological motion stimuli (Fig. 1a and Movie 1, which is available as supporting information on the PNAS web site, www.pnas.org) consist of a frame animation of the motion of light points attached to the joints of a moving human figure (7). The moving pattern of dots in this case contains information about the position of points on the body and about the motion of these points over time. The motion signal is carried by the apparent image motion of each individual point in two successive animation frames. The stimuli we created dissociate these two sources of information.
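The limited-lifetime idea can be sketched in a few lines. This is a minimal illustration of the principle, not the authors' actual stimulus code; the limb geometry, point count, and single-frame lifetime are placeholder assumptions:

```python
import random

# Each limb is a segment between two joint positions. Static 2D
# coordinates stand in for one animation frame of the walker model.
limbs = [((0.0, 0.0), (0.0, 1.0)),   # e.g. a lower leg
         ((0.0, 1.0), (0.5, 1.8))]   # e.g. an upper leg

def sample_point(limb):
    """Place a point at a random position along a limb segment."""
    (x1, y1), (x2, y2) = limb
    t = random.random()
    return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))

def next_frame(n_points=8):
    """Redraw every point at a new random limb location each frame.

    Because no point survives into the next frame at a displaced
    position, successive frames carry posture (form) information but
    no consistent local image-motion signal.
    """
    return [sample_point(random.choice(limbs)) for _ in range(n_points)]
```

An animation loop would simply call `next_frame()` once per frame while updating the underlying joint coordinates of the walker.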
Eye or head rotation would influence perceived heading direction if heading were coded by cells tuned only to retinal flow patterns that correspond to linear self-movement. We propose a model for heading detection based on motion templates that are also Gaussian-tuned to the amount of rotational flow. Such retinal flow templates allow explicit use of extra-retinal signals to create templates tuned to head-centric flow as seen by the stationary eye. Our model predicts an intermediate layer of 'eye velocity gain fields', in which rate-coded eye velocity is multiplied with the responses of templates sensitive to specific retinal flow patterns. By combining the activities of one retinal flow template and many units with an eye velocity gain field, a new type of unit appears: its preferred retinal flow changes dynamically in accordance with the eye rotation velocity. This unit's activity thereby becomes approximately invariant to the amount of eye rotation. The units with eye velocity gain fields form the motion analogue of the units with eye position gain fields found in area 7a, which, according to our general approach, are needed to transform position from retino-centric to head-centric coordinates. The rotation-tuned templates can also provide rate-coded visual estimates of eye rotation, allowing a purely visual compensation for rotational flow. Our model is consistent with psychophysical data that indicate a role for extra-retinal as well as visual rotation signals in the correct perception of heading.
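The multiplicative gain-field stage described above can be caricatured numerically. This is a toy sketch, not the published model: the tuning widths, rotation-preference grid, and scalar stand-ins for full flow fields are all assumptions made here for illustration:

```python
import math

def gaussian(x, mu, sigma):
    """Unnormalized Gaussian tuning curve."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2)

def template_response(heading, rotation, pref_heading, pref_rotation):
    """A retinal-flow template, Gaussian-tuned both to heading
    direction and to the amount of rotational flow (widths arbitrary)."""
    return gaussian(heading, pref_heading, 5.0) * \
           gaussian(rotation, pref_rotation, 2.0)

def head_centric_response(heading, eye_velocity, pref_heading):
    """Sum many 'eye velocity gain field' units: each multiplies a
    rate-coded eye-velocity signal with one rotation-tuned template.
    The pooled unit's preferred retinal flow effectively shifts with
    eye rotation, so its output is ~invariant to eye velocity."""
    retinal_rotation = eye_velocity  # rotational flow caused by the eye turn
    total = 0.0
    for pref_rot in [0.5 * r for r in range(-20, 21)]:  # -10 .. 10 deg/s
        gain = gaussian(eye_velocity, pref_rot, 1.0)    # rate-coded eye velocity
        total += gain * template_response(heading, retinal_rotation,
                                          pref_heading, pref_rot)
    return total
```

With this wiring, the same heading produces nearly the same pooled response whether the eye is still or rotating, while a wrong heading stays weak at every eye velocity.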
When we move about, we frequently look around. How quickly and accurately can we direct our gaze toward the direction of heading? We studied the temporal aspects of heading perception in expanding and contracting patterns simulating self-motion. Center of flow (CF) eccentricity was 15 degrees. Subjects had to indicate the CF by making a saccade to it. A temporal constraint on the response time was introduced because stimuli were presented briefly (1 s). On average, subjects needed two saccades to find the CF. Initial saccades covered about 50-60% of the distance between the fixation point and the CF. Subjects underestimated the eccentricity of the CF: the systematic radial error ranged from -2.4 degrees to -4.9 degrees, whereas the systematic tangential error was small (about 0.5 degree). The variable radial error ranged from 2.7 degrees to 4.6 degrees. We found a relation between saccade onset time and saccade endpoint error: endpoint error decreased with increasing onset time, suggesting that saccades were often launched before heading processing had been completed. From the saccade onset times, the saccade endpoint errors, and an estimate of the saccadic dead time (the interval prior to the saccade during which the saccade can no longer be modified; about 70 ms), we estimated the heading processing time (HPT; about 0.43 s). In three out of four subjects, HPT was longer for trials simulating backward movement than for trials simulating forward movement. For each saccade we determined whether it reduced the distance error. The second saccade reduced the error more effectively per unit time than the initial saccade. On the basis of this finding, we suggest that visual processing occurring during the saccadic dead time of the first saccade is used in the preparation of the second saccade.
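The timing arithmetic behind the HPT estimate can be made concrete. The plateau onset time below is a hypothetical value chosen only so the numbers are consistent with the reported HPT; the study derives its estimate from the measured relation between onset times and endpoint errors:

```python
# Saccadic dead time: the interval before a saccade during which the
# movement can no longer be modified (value from the text).
dead_time = 0.070  # s

# Hypothetical saccade onset time beyond which endpoint error no longer
# decreases, i.e. heading processing has finished by then (illustrative,
# not a measured value).
t_plateau = 0.50   # s

# Processing must be complete dead_time before the saccade launches,
# so the heading processing time is the plateau onset minus dead time.
hpt = t_plateau - dead_time
print(round(hpt, 2))  # heading processing time in seconds
```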