To localize one's hand, i.e., to find out its position with respect to the body, humans may use proprioceptive information, visual information, or both. It is still not known how the CNS combines simultaneous proprioceptive and visual information. In this study, we investigate in which position in a horizontal plane a hand is localized on the basis of simultaneous proprioceptive and visual information, and we compare this to the positions in which it is localized on the basis of proprioception only and vision only. Seated at a table, subjects matched target positions on the table top with their unseen left hand under the table. The experiment consisted of three series. In each series, the target positions were presented in three conditions: by vision only, by proprioception only, or by both vision and proprioception. In one of the three series, the visual information was veridical. In the other two, it was modified by prisms that displaced the visual field to the left and to the right, respectively. The results show that the mean of the positions indicated in the condition with both vision and proprioception generally lies off the straight line through the means of the other two conditions. In most cases, the mean lies on the side predicted by a model describing the integration of multisensory information. According to this model, the visual and proprioceptive information are weighted with direction-dependent weights, these weights being related to the direction-dependent precision of the information in such a way that the available information is used very efficiently. Because the proposed model can also explain the unexpectedly small variable errors in the localization of a seen hand that were reported earlier, there is strong evidence to support it. The results imply that the CNS has knowledge about the direction-dependent precision of the proprioceptive and visual information.
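The direction-dependent weighting this model describes can be written as covariance-weighted (minimum-variance) fusion of the two position estimates. The sketch below is illustrative only: the positions and covariance values are assumptions chosen to mimic vision being more precise in azimuth and proprioception being more precise in depth, not values from the study.

```python
import numpy as np

def fuse(x_vis, cov_vis, x_prop, cov_prop):
    """Minimum-variance fusion of two 2-D position estimates.

    Each estimate is weighted by its inverse covariance, so the
    direction in which a modality is more precise receives more
    weight in that direction.
    """
    w_vis = np.linalg.inv(cov_vis)     # precision (inverse covariance) of vision
    w_prop = np.linalg.inv(cov_prop)   # precision of proprioception
    cov_fused = np.linalg.inv(w_vis + w_prop)
    x_fused = cov_fused @ (w_vis @ x_vis + w_prop @ x_prop)
    return x_fused, cov_fused

# Illustrative numbers (not from the paper): vision precise in azimuth
# (x), proprioception precise in the radial/depth direction (y).
x_vis = np.array([0.02, 0.00])        # visually perceived hand position (m)
x_prop = np.array([0.00, 0.03])       # proprioceptively perceived position (m)
cov_vis = np.diag([0.0001, 0.0009])   # small azimuthal, large radial variance
cov_prop = np.diag([0.0009, 0.0001])  # large azimuthal, small radial variance

x_hat, cov_hat = fuse(x_vis, cov_vis, x_prop, cov_prop)
print(x_hat)    # lies off the straight line between x_vis and x_prop
print(cov_hat)  # smaller than either input covariance
```

Because the two precision matrices favour different directions, the fused estimate is pulled toward each cue along the direction in which that cue is reliable, which is exactly why the bimodal mean can lie off the line through the two unimodal means.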
The purpose of this study was to determine the precision of proprioceptive localization of the hand in humans. We derived spatial probability distributions that describe the precision of localization on the basis of three different sources of information: proprioceptive information about the left hand, proprioceptive information about the right hand, and visual information. In the experiment, subjects were seated at a table and performed three different position-matching tasks. In each task, the positions of a target and of an indicator were available through a different combination of two of these three sources of information. From the spatial distributions of indicated positions in these three conditions, we derived spatial probability distributions for proprioceptive localization of the two hands and for visual localization. For proprioception, we found that localization in the radial direction with respect to the shoulder is more precise than localization in the azimuthal direction. The distributions for proprioceptive localization also suggest that hand positions closer to the shoulder are localized more precisely than positions further away. These patterns can be understood from the geometry of the arm. In addition, the variability in the indicated positions suggests that the shoulder and elbow angles are known to the central nervous system with a precision of 0.6-1.1 degrees. This precision is considerably better than the values reported in studies on the perception of these angles, which implies that joint angles, or quantities equivalent to them, are represented in the central nervous system more precisely than they are consciously perceived. For visual localization, we found that localization in the azimuthal direction with respect to the cyclopean eye is more precise than localization in the radial direction. The precision of the perception of visual direction is of the order of 0.2-0.6 degrees.
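The link between arm geometry and these precision patterns can be made concrete by propagating joint-angle noise through the forward kinematics of a planar two-link arm. This is a sketch under simplifying assumptions (independent, equal noise on the two joints; illustrative link lengths and joint angles), not the analysis used in the study.

```python
import numpy as np

def endpoint_cov(theta1, theta2, l1=0.30, l2=0.33, sigma_deg=0.8):
    """Propagate joint-angle noise to fingertip position covariance
    for a planar two-link arm with the shoulder at the origin.

    sigma_deg is the assumed standard deviation of the shoulder and
    elbow angles, chosen in the 0.6-1.1 deg range inferred above.
    """
    s = np.radians(sigma_deg)
    # Jacobian of the forward kinematics p(theta1, theta2)
    J = np.array([
        [-l1*np.sin(theta1) - l2*np.sin(theta1+theta2), -l2*np.sin(theta1+theta2)],
        [ l1*np.cos(theta1) + l2*np.cos(theta1+theta2),  l2*np.cos(theta1+theta2)],
    ])
    cov_theta = np.diag([s**2, s**2])  # independent joint-angle noise
    return J @ cov_theta @ J.T         # linearized endpoint covariance

cov = endpoint_cov(np.radians(45), np.radians(60))
evals, evecs = np.linalg.eigh(cov)
print(np.sqrt(evals))  # std devs along the principal axes; the larger
                       # one lies roughly azimuthal, the smaller roughly
                       # radial with respect to the shoulder
```

Because shoulder-angle noise moves the fingertip azimuthally by an amount proportional to its distance from the shoulder, the same sketch also reproduces the finding that positions closer to the shoulder are localized more precisely.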
To enable us to study how humans combine simultaneously present visual and proprioceptive position information, we had subjects perform a matching task. Seated at a table, they placed their left hand under the table, concealing it from view. They then had to match the proprioceptively perceived position of the left hand using only proprioceptive, only visual, or both proprioceptive and visual information. We analysed the variance of the indicated positions in the various conditions and compared the results with the predictions of a model in which simultaneously present visual and proprioceptive position information about the same object is integrated in the most effective way. The results disagree with the model: the variance in the condition with both visual and proprioceptive information is smaller than expected from the variances of the other conditions. This means that the available information was integrated in a highly effective way. Furthermore, the results suggest that additional information was used. This information might have been visual information about body parts other than the fingertip, or it might have been visual information about the environment.
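In the one-dimensional case, the benchmark such a model provides is the standard minimum-variance bound for combining two independent, unbiased estimates; the following is the textbook form of that prediction, not a formula quoted from the paper.

```latex
% Predicted variance when independent visual (V) and proprioceptive (P)
% estimates of the same position are combined optimally:
\sigma_{VP}^{2}
  = \left(\frac{1}{\sigma_{V}^{2}} + \frac{1}{\sigma_{P}^{2}}\right)^{-1}
  = \frac{\sigma_{V}^{2}\,\sigma_{P}^{2}}{\sigma_{V}^{2} + \sigma_{P}^{2}}
% This bound never exceeds the smaller of the two unimodal variances;
% an observed bimodal variance below it points to information beyond
% the two cues, which is the conclusion drawn above.
```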
In a previous study, we investigated how the CNS combines simultaneous visual and proprioceptive information about the position of the finger. We found that localization of the index finger of a seen hand was more precise (i.e., had a smaller variance) than could reasonably be expected from the precision of localization on the basis of vision only and proprioception only. This suggests that, in localizing the tip of the index finger of a seen hand, the CNS may use more information than proprioceptive information and visual information about the fingertip alone. In the present study, we investigate whether this additional information stems from additional sources of sensory information. In experiment 1, we tested whether seeing an entire arm instead of only the fingertip gives rise to a more precise proprioceptive and/or visual localization of that fingertip. In experiment 2, we checked whether the presence of a structured visual environment leads to a more precise proprioceptive localization of the index finger of an unseen hand. In experiment 3, we investigated whether looking in the direction of the index finger of an unseen hand improves proprioceptive localization of that finger. We found no significant effect in any of the experiments. The results refute the hypothesis that the investigated effects can explain the previously reported, very precise localization of a seen hand. This suggests that localization of a seen finger is based exclusively on proprioception and on vision of the finger. The results further suggest that these sensory signals may contain more information than is described by the magnitude of their variances.
The lateralization of visual speech perception was examined in 3 experiments. Participants were presented with a realistic computer-animated face articulating 1 of 4 consonant-vowel syllables without sound. The face appeared at 1 of 5 locations in the visual field. The participants' task was to identify each test syllable. To prevent eye movement during the presentation of the face, participants had to carry out a fixation task simultaneously with the speechreading task. In one study, an eccentricity effect was found along with a small but significant difference in favor of the right visual field (left hemisphere). The same results were found with the face articulating nonlinguistic mouth movements (e.g., kiss). These results suggest that the left-hemisphere advantage is based on the processing of dynamic visual information rather than on the extraction of linguistic significance from facial movements.