Human perception is based on expectations. We expect visual upright and gravity upright, sensed through vision, the vestibular system, and other sensory systems, to agree. Equally, we expect visual and vestibular information about self-motion to correspond. What happens when these assumptions are violated? Tilting a person from upright so that gravity is not where it should be affects both visually induced self-motion (vection) and the perception of upright. How might the two be connected? Using virtual reality, we varied the strength of visual orientation cues (an oriented corridor versus a starfield), and hence the probability of participants experiencing a visual reorientation illusion (VRI) in which visual cues to orientation dominate gravity, while also varying head-on-trunk orientation and body posture. The effectiveness of the optic flow in simulating self-motion was assessed by how much visual motion was required to evoke the perception that the participant had reached the position of a previously presented target. VRI was assessed by questionnaire. When participants reported higher levels of VRI, they also required less visual motion to evoke the sense of traveling through a given distance, regardless of head or body posture or the type of visual environment. We conclude that experiencing a VRI, in which visual-vestibular conflict is resolved and the direction of upright is reinterpreted, affects the effectiveness of optic flow at simulating motion through the environment. Therefore, any apparent effects of head or body posture or type of environment are largely indirect, related instead to the level of VRI experienced by the observer. We discuss potential mechanisms for this effect, such as reinterpreting gravity information or altering the weighting of orientation cues.
Our visual system maintains a stable representation of object size when viewing distance, and thus retinal size, changes. Previous studies have revealed that the extent of an object's representation in V1 shows systematic deviations from strict retinotopy when the object is perceived to be at different distances. It remains unknown, however, to what degree V1 activity accounts for perceptual size constancy. We investigated the neural correlates of size constancy using steady-state visually evoked potentials (SSVEPs), which are known to originate in early visual cortex. Flickering stimuli of various sizes were presented at a viewing distance of 40 cm, and stimuli twice as large were shown at 80 cm; both sets of stimuli therefore had identical retinal sizes. At a constant viewing distance, SSVEP amplitude increased with increasing retinal size. Crucially, SSVEP amplitude was larger when stimuli of a given retinal size were presented at 80 cm than at 40 cm, independent of flicker frequency. The experiments were repeated and extended in virtual reality. Our results agree with previous findings showing that V1 activity plays a role in size constancy. Furthermore, we estimated the neural correction in the SSVEP to be close to 50% of the perceptual size constancy. This was the case in all experiments, independent of the effectiveness of perceptual size constancy. We conclude that retinotopy in V1 is substantially adjusted by perceived size, but not to the full extent of perceptual judgments.
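The idea of a partial neural correction can be illustrated with a toy calculation. In this sketch (every number is hypothetical, not the study's data, and the linear-in-log-size amplitude model is our own simplifying assumption), SSVEP amplitude is fitted against log retinal size at the near distance; inverting that fit converts the amplitude observed at the far distance into an "effective" stimulus size, and a correction index locates that effective size between pure retinotopy (0) and full perceptual constancy (1):

```python
import numpy as np

# Hypothetical SSVEP amplitudes (µV) for stimuli of increasing retinal
# size, all shown at a fixed 40 cm viewing distance (illustrative values).
retinal_sizes = np.array([2.0, 4.0, 8.0])   # degrees of visual angle
amplitudes    = np.array([1.0, 1.6, 2.2])   # µV, grows with retinal size

# Fit a linear mapping from log retinal size to amplitude.
slope, intercept = np.polyfit(np.log(retinal_sizes), amplitudes, 1)

def effective_size(amplitude):
    """Invert the fit: what retinal size would produce this amplitude?"""
    return np.exp((amplitude - intercept) / slope)

# A 4 deg stimulus shown at 80 cm has the same retinal size as at 40 cm,
# but perceptually matches an 8 deg stimulus under full size constancy.
retinal, perceived = 4.0, 8.0
amp_far = 1.9  # hypothetical amplitude observed at 80 cm (vs. 1.6 at 40 cm)

eff = effective_size(amp_far)
# Correction index: 0 = pure retinotopy, 1 = full perceptual constancy.
correction = (np.log(eff) - np.log(retinal)) / (np.log(perceived) - np.log(retinal))
print(f"effective size {eff:.2f} deg, correction {correction:.2f}")
# -> effective size 5.66 deg, correction 0.50
```

With these assumed numbers the far-distance amplitude corresponds to an effective size halfway (in log units) between the retinal and the perceived size, i.e. a correction index of 0.50, mirroring the abstract's roughly 50% figure.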
Self-motion information can be used to update spatial memory of location through an estimate of a change in position. Viewing optic flow alone can create illusory self-motion, or "vection." Early studies suggested that peripheral vision is more effective than central vision in evoking vection, but controlling for retinal area and perceived distance suggests that all retinal areas may be equally effective. However, the contributions of the far periphery, beyond 90°, have been largely neglected. Using a large-field Edgeless Graphics Geometry display (EGG, Christie, Canada; field of view ±112°) and systematically blocking central (±20° to ±90°) or peripheral (viewing through tunnels of ±20° to ±40°) parts of the field, we compared the effectiveness of different retinal regions at evoking forward linear vection. Fifteen participants indicated when they had reached the position of a previously presented target while experiencing visually simulated motion down a virtual corridor. The amount of simulated travel needed to match a given target distance was modelled with a leaky spatial integrator to estimate gains (perceived/actual distance) and a spatial decay factor. When optic flow was presented only in the far periphery (beyond 90°), gains were significantly higher than for the same motion presented full field or in only the central field, resulting in accurate performance in the range of speeds associated with normal walking. The increased effectiveness of optic flow in the peripheral field alone compared to full-field motion is discussed in terms of emerging neurophysiological studies that suggest brain areas dedicated to processing information from the far peripheral field.
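The leaky spatial integrator referred to above can be sketched in code. In one common formulation, perceived travelled distance D accumulates with actual distance x as dD/dx = k − αD, where k is the gain and α the spatial leak, so the actual travel needed to match a target distance T is x = −ln(1 − αT/k)/α. The parameter values below are our own illustrative assumptions, not the fitted values from the study:

```python
import numpy as np

def travel_to_match(target, gain, alpha):
    """Actual travel x at which the leaky integrator's perceived distance
    D(x) = (gain/alpha) * (1 - exp(-alpha * x)) first reaches `target`.
    Solving D(x) = target gives x = -ln(1 - alpha*target/gain) / alpha."""
    return -np.log(1.0 - alpha * target / gain) / alpha

# Illustrative (not fitted) parameters: a small spatial leak per metre,
# with a lower gain for full-field motion than for far-peripheral motion.
target = 8.0  # metres
for label, gain in [("full field", 0.8), ("far periphery", 1.05)]:
    x = travel_to_match(target, gain, alpha=0.02)
    print(f"{label}: travel {x:.2f} m to match an {target:.0f} m target")
```

With these assumed values the higher far-peripheral gain brings the required travel (about 8.3 m) close to the actual 8 m target, while the lower full-field gain produces overshoot (about 11.2 m), matching the qualitative pattern described in the abstract.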
Here, we investigate how body orientation relative to gravity affects the perceived size of visual targets. In virtual reality, participants judged the size of a visual target presented at simulated distances of between 2 and 10 m, comparing it to a physical reference length held in their hands, while standing, lying prone, or lying supine. Participants needed to make the visual size of the target 5.4% larger when supine and 10.1% larger when prone, compared to when upright, for it to be perceived as matching the physical reference length. Needing to make the target larger when lying than when standing suggests several possibilities that are not mutually exclusive. Participants may have perceived the targets as smaller while tilted than when upright. They may have perceived the targets as closer while tilted than when upright. They may also have perceived the physical reference length as longer while tilted. Misperceiving objects as larger and/or closer when lying down may provide a survival benefit while in such a vulnerable position.
When we perform a goal-directed movement, tactile sensitivity on the moving limb is reduced compared to at rest. This well-established finding of movement-related tactile suppression is often investigated with psychophysical paradigms using custom haptic actuators and highly constrained movement tasks. However, studying more naturalistic movement scenarios is becoming more accessible owing to the increased availability of affordable, off-the-shelf virtual reality (VR) hardware. Here, we present a first evaluation of consumer VR controllers (HTC Vive and Valve Index) for psychophysical testing using their built-in vibrotactile actuators. We show that participants' tactile perceptual thresholds can generally be estimated by manipulating controller vibration amplitude and frequency. When participants performed a goal-directed movement using the controller, vibrotactile perceptual thresholds increased compared to rest, in agreement with previous work and confirming the suitability of unmodified VR controllers for tactile suppression research. Our findings will facilitate investigations of tactile perception in dynamic virtual scenarios.
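Threshold estimation of this kind is typically done with an adaptive staircase. The following is a minimal generic sketch, not the authors' actual procedure: a 2-down/1-up staircase on vibration amplitude level, with a simulated deterministic observer standing in for real yes/no responses. A 2-down/1-up rule converges near the 70.7%-detection point:

```python
import random

def two_down_one_up(start, step, n_trials, detect_prob):
    """2-down/1-up staircase on vibration amplitude level: two consecutive
    detections lower the amplitude, one miss raises it.
    `detect_prob(amp)` models the observer's probability of detection."""
    amp, streak, reversals, last_dir = start, 0, [], None
    for _ in range(n_trials):
        if random.random() < detect_prob(amp):   # trial outcome
            streak += 1
            if streak < 2:
                continue
            streak, direction = 0, -1            # two in a row: harder
        else:
            streak, direction = 0, +1            # miss: easier
        if last_dir is not None and direction != last_dir:
            reversals.append(amp)                # staircase turned around
        last_dir = direction
        amp = max(0, amp + direction * step)
    tail = reversals[-6:]                        # average late reversals
    return sum(tail) / max(1, len(tail))

# Simulated observer with a hard detection threshold at amplitude level 50:
# the staircase oscillates between 40 and 50, giving an estimate of 45.0.
estimate = two_down_one_up(100, 10, 60, lambda a: 1.0 if a >= 50 else 0.0)
print(estimate)  # -> 45.0
```

With real participants, `detect_prob` would be replaced by a trial loop that pulses the controller's actuator at the current amplitude level and records the response; the threshold estimate is then the average amplitude over the final reversals.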