Experiments are reported that assessed the ability of people, without vision, to locate the positions of objects from imagined points of observation that are related to their actual position by rotational or translational components. Theoretical issues addressed were whether spatial relations stored in an object-to-object system are directly retrieved or whether retrieval is mediated by a body-centered coordinate system, and whether body-centered access involves a process of imagined updating of self-position. The results, with those of Rieser (1989), indicate that in the case of regularly structured object arrays, interobject relations are directly retrieved for the translation task, but for the rotation task, retrieval occurs by means of a body-centered coordinate system, requiring imagined body rotation. For irregularly structured arrays, access of interobject spatial structure occurs by means of a body-centered coordinate system for both translation and rotation tasks, requiring imagined body translation or rotation. Array regularity affected retrieval of spatial structure in terms of the global shape of interobject relations and local object position within the global shape.
Haptic cues from fingertip contact with a stable surface attenuate body sway in subjects even when the contact forces are too small to provide physical support of the body. We investigated how haptic cues derived from contact of a cane with a stationary surface at low force levels aid postural control in sighted and congenitally blind individuals. Five sighted (eyes closed) and five congenitally blind subjects maintained a tandem Romberg stance in five conditions: (1) no cane; (2, 3) touch contact (<2 N of applied force) while holding the cane in a vertical or slanted orientation; and (4, 5) force contact (as much force as desired) in the vertical and slanted orientations. Touch contact of a cane at force levels below those necessary to provide significant physical stabilization was as effective as force contact in reducing postural sway in all subjects, compared with the no-cane condition. A slanted cane was far more effective in reducing postural sway than was a perpendicular cane. Cane use also decreased head displacement of sighted subjects far more than that of blind subjects. These results suggest that head movement control is linked to postural control through gaze stabilization reflexes in sighted subjects; such reflexes are absent in congenitally blind individuals and may account for their higher levels of head displacement.
This study assessed whether stationary auditory information could affect body and head sway (as does visual and haptic information) in sighted and congenitally blind people. Two speakers, one placed adjacent to each ear, significantly stabilized center-of-foot-pressure sway in a tandem Romberg stance, whereas neither a single speaker in front of subjects nor a head-mounted sonar device reduced center-of-pressure sway. Center-of-pressure sway was reduced to the same level in the two-speaker condition for sighted and blind subjects. Both groups also evidenced reduced head sway in the two-speaker condition, although blind subjects' head sway was significantly larger than that of sighted subjects. The advantage of the two-speaker condition was probably attributable to the nature of distance, as compared with directional, auditory information. The results rule out a deficit model of spatial hearing in blind people and are consistent with one version of a compensation model. Analysis of maximum cross-correlations between center-of-pressure and head sway, and of the associated time lags, suggests that blind and sighted people may use different sensorimotor strategies to achieve stability.
Explicit memory tests such as recognition typically access semantic, modality-independent representations, while perceptual implicit memory tests typically access presemantic, modality-specific representations. By demonstrating comparable cross- and within-modal priming using vision and haptics with verbal materials (Easton, Srinivas, & Greene, 1997), we recently questioned whether the representations underlying perceptual implicit tests were modality specific. Unlike vision and audition, with vision and haptics verbal information can be presented in geometric terms to both modalities. The present experiments extend this line of research by assessing implicit and explicit memory within and between vision and haptics in the nonverbal domain, using both 2-D patterns and 3-D objects. Implicit test results revealed robust cross-modal priming for both 2-D patterns and 3-D objects, indicating that vision and haptics share abstract representations of object shape and structure. Explicit test results for 3-D objects revealed modality specificity, indicating that the recognition system keeps track of the modality through which an object is experienced.
Two experiments were performed under visual-only and visual-auditory discrepancy conditions (dubs) to assess observers' abilities to read speech information on a face. In the first experiment, identification and multiple-choice testing were used. In addition, the relation between visual and auditory phonetic information was manipulated and related to perceptual bias. In the second experiment, the "compellingness" of the visual-auditory discrepancy as a single speech event was manipulated. Subjects also rated the confidence they had that their perception of the lipped word was accurate. Results indicated that competing visual information exerted little effect on auditory speech recognition, but visual speech recognition was substantially interfered with when discrepant auditory information was present. The extent of auditory bias was found to be related to the abilities of observers to read speech under nondiscrepancy conditions, the magnitude of the visual-auditory discrepancy, and the compellingness of the visual-auditory discrepancy as a single event. Auditory bias during speech was found to be a moderately compelling conscious experience, and not simply a case of confused responding or guessing. Results were discussed in terms of current models of perceptual dominance and related to results from modality discordance during space perception.