Despite the many studies on the visual control of grasping, little is known about how and when small variations in shape affect grasping kinematics. In the present study we asked subjects to grasp elliptical cylinders that were placed 30 and 60 cm in front of them. The cylinders' aspect ratio was varied systematically between 0.4 and 1.6, and their orientation was varied in steps of 30 degrees. Subjects picked up all noncircular cylinders with a hand orientation that approximately coincided with one of the principal axes. The probability of selecting a given principal axis was highest when its orientation was equal to the preferred orientation for picking up a circular cylinder at the same location. The maximum grip aperture was scaled to the length of the selected principal axis, but it was also larger when the axis orthogonal to the grip axis was longer than the grip axis itself. The correlation between the grip aperture (or the hand orientation) at a given instant and its final value increased monotonically with the distance traversed. The final hand orientation could already be inferred from its value after 30% of the movement distance with a reliability that explains 50% of the variance. For the final grip aperture, this was only so after 80% of the movement distance. The results indicate that the perceived shape of the cylinder is used for selecting appropriate grasping locations before or early in the movement, and that the grip aperture and orientation are gradually attuned to these locations during the movement.
Humans are experts in cooperating with each other when trying to accomplish tasks they cannot achieve alone. Recent studies of joint action have shown that when performing tasks together, people strongly rely on the neurocognitive mechanisms that they also use when performing actions individually; that is, they predict the consequences of their co-actor's behavior through internal action simulation. Context-sensitive action monitoring and action selection processes, however, are relatively underrated but crucial ingredients of joint action. In the present paper, we try to correct this somewhat simplified view of joint action by reviewing recent studies of joint action simulation, monitoring, and selection, while emphasizing the intricate interrelationships between these processes. We complement our review by defining the contours of a neurologically plausible computational framework of joint action.
The visual environment is distorted with respect to the physical environment. Luneburg [1947, Mathematical Analysis of Binocular Vision (Princeton, NJ: Princeton University Press)] assumed that visual space could be described by a Riemannian space of constant curvature. Such a space is described by a metric, which defines the distance between any two points. It is uncertain, however, whether such a metric description is valid. Two experiments are reported in which subjects were asked to set two bars parallel to each other in a horizontal plane. The backdrop consisted of wrinkled black plastic sheeting, and the floor and ceiling were hidden by means of a horizontal aperture restricting the visual field of the subject vertically to 10 degrees. We found that large deviations (of up to 40 degrees) occur and that the deviations are proportional to the separation angle: on average, the proportion is 30%. These deviations occur for the 30, 60, 120, and 150 degrees reference orientations, but not for the 0 and 90 degrees reference orientations; there the deviation is approximately 0 degrees for most subjects. A Riemannian space of constant curvature, therefore, cannot be an adequate description. If it were, then the deviation between the orientations of the test and reference bars would be independent of the reference orientation. Furthermore, we found that the results are independent of the distance of the bars from the subject, which suggests either that visual space has a zero mean curvature, or that the parallelity task is essentially a monocular task. The fact that the deviations vanish for the 0 and 90 degrees orientations is reminiscent of the oblique effect reported in the literature. However, the 'oblique effect' reported here takes place in a horizontal plane at eye height, not in a frontoparallel plane.
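The constant-curvature assumption being tested here can be stated compactly. A standard textbook form of the line element for a two-dimensional Riemannian space of constant curvature K (written in geodesic polar coordinates; this is the general family of spaces Luneburg considered, not his specific fitted model) is:

```latex
% Line element of a 2-D Riemannian space of constant curvature K,
% in geodesic polar coordinates (r, \varphi):
ds^2 = dr^2 + S_K(r)^2 \, d\varphi^2,
\qquad
S_K(r) =
\begin{cases}
\dfrac{\sin(\sqrt{K}\,r)}{\sqrt{K}}   & K > 0 \ \text{(elliptic)},\\[6pt]
r                                     & K = 0 \ \text{(Euclidean)},\\[6pt]
\dfrac{\sinh(\sqrt{-K}\,r)}{\sqrt{-K}} & K < 0 \ \text{(hyperbolic)}.
\end{cases}
```

Because such a space is homogeneous and isotropic, any deviation it induces between two bars set to appear parallel can depend only on their separation, not on the reference orientation; the orientation-dependent deviations reported above are therefore incompatible with this entire class of metrics.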
Classically, it has been assumed that visual space can be represented by a metric. This means that the distance between points and the angle between lines can be uniquely defined. However, this assumption has never been tested. Also, measurements outdoors, where monocular cues are abundant, conflict with this model. This paper reports on two experiments in which the structure of visual space was investigated, using an exocentric pointing task. In the first experiment, we measured the influence of the separation between pointer and target and of the orientation of the stimuli with respect to the observer. This was done both monocularly and binocularly. It was found that the deviation of the pointer settings depended linearly on the orientation, indicating that visual space is anisotropic. The deviations for configurations that were symmetrical in the median plane were approximately the same, indicating that left/right symmetry was maintained. The results for monocular and binocular conditions were very different, which indicates that stereopsis was an important cue. In both conditions, there were large deviations from the veridical. In the second experiment, the relative distance of the pointer and the target with respect to the observer was varied in both the monocular and the binocular conditions. The relative distance turned out to be the main parameter for the ranges used (1-5 m). Any distance function must have an expanding and a compressing part in order to describe the data. In the binocular case, the results were much more consistent than in the monocular case and had a smaller standard deviation. Nevertheless, the systematic mispointings remained large. It can therefore be concluded that stereopsis improves space perception but does not improve veridicality.
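The requirement that "any distance function must have an expanding and a compressing part" can be illustrated with a hypothetical mapping from physical distance r to visually perceived distance f(r). The specific function and the parameter \mu below are illustrative assumptions, not the form fitted in the experiment:

```latex
% A hypothetical mapping from physical distance r to visual distance f(r)
% with the required shape: expansion near the observer, compression far away.
f(r) = \frac{2r}{1 + \mu r}, \qquad \mu > 0,
\qquad
f'(r) = \frac{2}{(1 + \mu r)^2}.
```

Here f'(r) > 1 (expansion) for r < (\sqrt{2} - 1)/\mu and f'(r) < 1 (compression) beyond that distance, so near space is perceptually stretched while far space is compressed, matching the qualitative pattern the pointing data require over the 1-5 m range.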