To localize one's hand, i.e., to find its position with respect to the body, humans may use proprioceptive information, visual information, or both. It is still not known how the CNS combines simultaneous proprioceptive and visual information. In this study, we investigate in what position in a horizontal plane a hand is localized on the basis of simultaneous proprioceptive and visual information, and compare this to the positions in which it is localized on the basis of proprioception only and vision only. Seated at a table, subjects matched target positions on the table top with their unseen left hand under the table. The experiment consisted of three series. In each series, the target positions were presented in three conditions: by vision only, by proprioception only, or by both vision and proprioception. In one of the three series, the visual information was veridical; in the other two, it was modified by prisms that displaced the visual field to the left and to the right, respectively. The results show that the mean of the positions indicated in the condition with both vision and proprioception generally lies off the straight line through the means of the other two conditions. In most cases the mean lies on the side predicted by a model describing the integration of multisensory information. According to this model, the visual information and the proprioceptive information are weighted with direction-dependent weights, the weights being related to the direction-dependent precision of the information in such a way that the available information is used very efficiently. Because the proposed model can also explain the unexpectedly small variable errors in the localization of a seen hand that were reported earlier, there is strong evidence to support this model. The results imply that the CNS has knowledge of the direction-dependent precision of the proprioceptive and visual information.
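The weighting scheme described above corresponds to minimum-variance (maximum-likelihood) integration of two independent Gaussian estimates, in which each cue is weighted by its inverse variance. The sketch below illustrates the principle in one dimension; the function name and the numerical values are hypothetical, chosen only to show that the combined estimate is pulled toward the more precise cue and that the combined variance is smaller than either input variance.

```python
def integrate_cues(x_vis, var_vis, x_prop, var_prop):
    """Minimum-variance integration of two independent Gaussian
    position estimates. Each cue is weighted by the inverse of its
    variance, so the more precise cue dominates the combined estimate."""
    w_vis = (1.0 / var_vis) / (1.0 / var_vis + 1.0 / var_prop)
    w_prop = 1.0 - w_vis
    x_combined = w_vis * x_vis + w_prop * x_prop
    # The combined variance is never larger than either input variance.
    var_combined = 1.0 / (1.0 / var_vis + 1.0 / var_prop)
    return x_combined, var_combined

# Hypothetical example (positions in cm, variances in cm^2):
# vision is more precise here, so the result lies closer to x_vis.
x, v = integrate_cues(x_vis=10.0, var_vis=1.0, x_prop=14.0, var_prop=4.0)
```

Because the precision of each modality is direction-dependent, the model applies such a weighting separately along each direction, which is why the combined mean can lie off the straight line between the vision-only and proprioception-only means.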
The purpose of this study was to determine the precision of proprioceptive localization of the hand in humans. We derived spatial probability distributions that describe the precision of localization on the basis of three different sources of information: proprioceptive information about the left hand, proprioceptive information about the right hand, and visual information. In the experiment, subjects were seated at a table and had to perform three different position-matching tasks. In each task, the position of a target and the position of an indicator were available in a different combination of two of these three sources of information. From the spatial distributions of indicated positions in these three conditions, we derived spatial probability distributions for proprioceptive localization of the two hands and for visual localization. For proprioception, we found that localization in the radial direction with respect to the shoulder is more precise than localization in the azimuthal direction. The distributions for proprioceptive localization also suggest that hand positions closer to the shoulder are localized more precisely than positions further away. These patterns can be understood from the geometry of the arm. In addition, the variability in the indicated positions suggests that the shoulder and elbow angles are known to the central nervous system with a precision of 0.6-1.1 degrees. This precision is considerably better than the values reported in studies on perception of these angles, which implies that joint angles, or quantities equivalent to them, are represented in the central nervous system more precisely than they are consciously perceived. For visual localization, we found that localization in the azimuthal direction with respect to the cyclopean eye is more precise than localization in the radial direction. The precision of the perception of visual direction is of the order of 0.2-0.6 degrees.
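The link between joint-angle precision and the direction-dependent shape of the hand-position distribution can be made explicit by propagating joint-angle noise through the forward kinematics of a planar two-link arm. The sketch below is a minimal illustration, not the authors' analysis: the link lengths are hypothetical, and the angular standard deviation of 0.8 degrees is simply a value within the 0.6-1.1 degree range reported above.

```python
import numpy as np

def hand_covariance(theta_s, theta_e, l1=0.30, l2=0.33, sigma_deg=0.8):
    """Propagate independent Gaussian noise on the shoulder angle
    (theta_s) and elbow angle (theta_e) to a covariance matrix of the
    planar hand position, via the Jacobian of the two-link forward
    kinematics. Angles in radians; link lengths in metres."""
    s = np.radians(sigma_deg)
    # Forward kinematics: x = l1*cos(ts) + l2*cos(ts+te),
    #                     y = l1*sin(ts) + l2*sin(ts+te)
    J = np.array([
        [-l1 * np.sin(theta_s) - l2 * np.sin(theta_s + theta_e),
         -l2 * np.sin(theta_s + theta_e)],
        [ l1 * np.cos(theta_s) + l2 * np.cos(theta_s + theta_e),
          l2 * np.cos(theta_s + theta_e)],
    ])
    cov_angles = (s ** 2) * np.eye(2)      # independent joint-angle noise
    return J @ cov_angles @ J.T            # linearized hand covariance
```

Because angular noise displaces the hand mostly perpendicular to the limb segments, the resulting covariance ellipse is elongated in the azimuthal direction with respect to the shoulder, consistent with the radial-versus-azimuthal asymmetry described above.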
To enable us to study how humans combine simultaneously present visual and proprioceptive position information, we had subjects perform a matching task. Seated at a table, they placed their left hand under the table, concealing it from view. They then had to match the proprioceptively perceived position of the left hand using only proprioceptive, only visual, or both proprioceptive and visual information. We analysed the variance of the indicated positions in the various conditions and compared the results with the predictions of a model in which simultaneously present visual and proprioceptive position information about the same object is integrated in the most effective way. The results are in disagreement with the model: the variance in the condition with both visual and proprioceptive information is smaller than expected from the variances of the other conditions. This means that the available information was integrated in a highly effective way. Furthermore, the results suggest that additional information was used. This information might have been visual information about body parts other than the fingertip, or it might have been visual information about the environment.
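The comparison described above rests on a lower bound: if only the two modelled cues are integrated optimally, the combined variance cannot fall below the product of the individual variances divided by their sum. A measured variance below that bound therefore points to information beyond the two modelled cues. The sketch below illustrates the check; the numerical values are hypothetical and serve only to show the logic.

```python
def optimal_variance_bound(var_vis, var_prop):
    """Lowest combined variance achievable by optimally integrating
    two independent cues with the given variances (product over sum)."""
    return (var_vis * var_prop) / (var_vis + var_prop)

# Hypothetical variances (cm^2) for the single-cue conditions:
bound = optimal_variance_bound(var_vis=1.0, var_prop=4.0)

# If the variance measured in the combined condition falls below the
# bound, extra information must have been used -- the interpretation
# offered in the text.
observed = 0.5  # hypothetical measured variance in the combined condition
extra_info_used = observed < bound
```

In this illustrative case the bound is 0.8 cm^2, so an observed variance of 0.5 cm^2 would indicate that sources beyond the two modelled cues contributed to the estimate.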
We have investigated how visual information from a scene moving along a subject's line of sight affects the postural adjustments the subject makes when instructed to maintain an upright posture. Two different types of stimulus patterns were presented, each inducing a different optic flow field: in one case the optic flow field was induced by simulating motion of the subject relative to a wall, and in the second case by simulating motion of the subject through a tunnel. In both cases, clear effects on postural balance were observed. This suggests that postural responses are invariant to the structure of the moving environment. The amplitude of the postural responses did not depend on the velocity of the simulated motion, and therefore did not depend on the absolute magnitude of the optic flow components. The amount of texture in the moving scene proved to be an important factor. In addition, it was found that the control of postural balance is not exclusively dominated by information provided by the peripheral part of the subject's visual field. Moreover, the results indicate that the divergence component of the optic flow field alone is not sufficient to control posture in the forward/backward direction.
1. In this study we have recorded the activity of motor units of the important muscles acting across the elbow joint during combinations of voluntary isometric torques in the flexion/extension direction and the supination/pronation direction at different angles of the elbow joint.
2. Most muscles are not activated homogeneously; instead, the population of motor units of a muscle can be subdivided into several subpopulations. Inhomogeneous activation of the population of motor units in a muscle is a general finding and is not restricted to some multifunctional muscles.
3. Muscles can be activated even if their mechanical action does not contribute directly to the external torque. For example, m. triceps is activated during supination torques and thus compensates for the flexion component of m. biceps. On the other hand, motor units in muscles are not necessarily activated if their mechanical action contributes to a prescribed torque. For example, there are motor units in m. biceps that are activated during flexion torques, but not during supination torques.
4. The relative activation of the muscles depends on the elbow angle. Changing the elbow angle affects the mechanical advantage of different muscles differently. In general, muscles with the larger mechanical advantage receive the larger input.
5. We have calculated the relative contributions of some muscles to isometric torques. These contributions depend on the combination of the torques exerted.
6. Existing theoretical models of muscle coordination do not incorporate subpopulations of motor units and therefore need to be amended.