The CNS may use multimodal reference frames to combine proprioceptive, visual, and gravitational information. Indeed, spatial information could be encoded simultaneously with respect to egocentric and allocentric references such as the body axis and gravity, respectively. It has further been proposed that gravity might serve to align reference frames between different sensory modalities. We performed a series of experiments in which human subjects matched the orientation of a visual stimulus to a visual reference (visual-visual), a haptic stimulus to a haptic reference (haptic-haptic), or a visual stimulus to a haptic reference (visual-haptic). These tests were performed in a normal upright posture, with the body tilted with respect to gravity, and in the weightless environment of Earth orbit. We found systematic patterns of errors in the matching of stimulus orientations. For an upright posture on Earth, a classic oblique effect appeared in the visual-visual comparison, which was then amplified in the visual-haptic task. Leftward or rightward whole-body tilt on Earth abolished both of these effects, yet each persisted in the absence of gravity. Leftward and rightward tilt also produced asymmetric biases in the visual-haptic but not in the visual-visual or haptic-haptic responses. These results illustrate how spatial anisotropy can be molded by sensorimotor transformations in the CNS. Furthermore, they indicate that gravity plays a significant but nonessential role in defining the reference frames for these tasks, providing insight into how the nervous system processes spatial information across different sensory modalities.