Abstract. Previous studies have examined the experience of owning a virtual surrogate body or body part through specific combinations of cross-modal multisensory stimulation. Both visuomotor (VM) and visuotactile (VT) synchronous stimulation have been shown to be important for inducing a body ownership illusion, whether tested separately or in combination. In this study we compared the relative importance of these two cross-modal correlations when both are provided in the same immersive virtual reality setup and the same experiment. We systematically manipulated VT and VM contingencies in order to assess their relative role and mutual interaction. Moreover, we present a new method for measuring the induced body ownership illusion through time, by recording reports of breaks in the illusion of ownership ('breaks') throughout the experimental phase. The balance of the evidence, from both questionnaires and analysis of the breaks, suggests that while VM synchronous stimulation contributes the most to the attainment of the illusion, a disruption of either (through asynchronous stimulation) contributes equally to the probability of a break in the illusion.
Agency, the attribution of authorship to an action of our body, requires the intention to carry out the action and, subsequently, a match between its predicted and actual sensory consequences. However, illusory agency can be generated through priming of the action together with perception of bodily action, even when there has been no actual corresponding action. Here we show that participants can have the illusion of agency over the walking of a virtual body even though in reality they are seated and only allowed head movements. The experiment (n = 28) had two factors: Perspective (1PP or 3PP) and Head Sway (Sway or NoSway). Participants either saw a life-sized virtual body spatially coincident with their own from a first-person perspective (1PP), or the same virtual body from a third-person perspective (3PP). In the Sway condition the viewpoint oscillated in accordance with the walking animation, but not in NoSway. The results show strong illusions of body ownership, agency and walking in the 1PP condition compared to the 3PP condition, and an enhanced level of arousal while the walking was up a virtual hill. Sway reduced the level of agency. We conclude with a discussion of the results in the light of current theories of agency.
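The Head Sway manipulation can be illustrated with a minimal sketch: a hypothetical model that layers a lateral oscillation and a vertical bob, locked to the walking animation's step cycle, onto the camera position each frame. The function name, step frequency, and amplitudes below are illustrative assumptions, not the study's actual parameters.

```python
import math

def sway_offset(t, step_freq=1.8, amp_lateral=0.02, amp_vertical=0.015):
    """Hypothetical head sway during virtual walking: a lateral oscillation
    at the stepping frequency and a vertical bob at twice that frequency
    (the head rises once per step). Returns (x, y) offsets in metres to be
    added to the viewpoint each frame; in the NoSway condition this would
    simply return (0.0, 0.0)."""
    phase = 2.0 * math.pi * step_freq * t
    lateral = amp_lateral * math.sin(phase)                        # side-to-side with the gait
    vertical = amp_vertical * (1.0 - math.cos(2.0 * phase)) / 2.0  # one bob per step
    return lateral, vertical

# At t = 0 the walker is mid-stance, so there is no offset yet
print(sway_offset(0.0))  # (0.0, 0.0)
```

The vertical component runs at twice the lateral frequency because the head bobs on every step while it drifts left and right over a full gait cycle; that coupling is a standard biomechanical assumption, not something stated in the abstract.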
We easily adapt to changes in the environment that involve cross-sensory discrepancies (e.g. between vision and proprioception). Adaptation can lead to changes in motor commands so that the experienced sensory consequences are appropriate for the new environment (e.g. we program a movement differently while wearing prisms that shift our visual space). In addition to these motor changes, perceptual judgments of space can also be altered (e.g. how far can I reach with my arm?). However, in previous studies that assessed perceptual judgments of space after visuomotor adaptation, the manipulation was always a planar spatial shift, whereas changes in body perception could not be assessed directly. Here, we investigated the effects of velocity-dependent (spatiotemporal) and spatial scaling distortions of arm movements on space and body perception, taking advantage of immersive virtual reality. Exploiting the perceptual illusion of embodiment in an entire virtual body, we endowed subjects with new spatiotemporal or spatial 3D mappings between motor commands and their sensory consequences. The results imply that spatiotemporal manipulations that speed up movement by factors of 2 and 4 can significantly change participants' proprioceptive judgments of a virtual object's size, without affecting perceived body ownership, although they did affect the sense of agency over the movements. Equivalent spatial manipulations of 11 and 22 degrees of angular offset also had a significant effect on the perceived virtual object's size; however, the mismatched information affected neither the sense of body ownership nor agency. We conclude that adaptation to spatial and spatiotemporal distortion can similarly change our perception of space, although spatiotemporal distortions can more easily be detected.
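The two distortion types described above can be sketched in a few lines: a spatiotemporal remapping scales frame-to-frame displacements by a velocity gain, while a spatial remapping rotates tracked positions by a fixed angular offset. Only the gain values (2x, 4x) and angles (11°, 22°) come from the abstract; the function names, axis choice, and frame conventions below are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def spatiotemporal_remap(positions, gain):
    """Scale per-frame displacements by a velocity gain, so the virtual
    hand moves `gain` times faster than the tracked real hand."""
    positions = np.asarray(positions, dtype=float)
    deltas = np.diff(positions, axis=0)  # per-frame displacements
    return np.vstack([positions[:1],
                      positions[0] + np.cumsum(gain * deltas, axis=0)])

def spatial_remap(positions, angle_deg, origin=np.zeros(3)):
    """Rotate tracked positions about the vertical (y) axis by a fixed
    angular offset: a purely spatial distortion of the mapping."""
    a = np.radians(angle_deg)
    rot = np.array([[np.cos(a), 0.0, np.sin(a)],
                    [0.0,       1.0, 0.0],
                    [-np.sin(a), 0.0, np.cos(a)]])
    return (np.asarray(positions, dtype=float) - origin) @ rot.T + origin

# Example: a straight 40 cm reach along x, sampled over 5 frames
reach = np.array([[0.1 * t, 0.0, 0.0] for t in range(5)])
fast = spatiotemporal_remap(reach, gain=2.0)   # virtual hand covers twice the distance
offset = spatial_remap(reach, angle_deg=11.0)  # same speed, rotated trajectory
```

Note the asymmetry the abstract reports: the velocity gain changes how fast the virtual hand moves (detectable as a loss of agency), whereas the rotation preserves movement speed and path length, which may be why it left both ownership and agency intact.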
Virtual characters that appear almost photo-realistic have been shown to induce negative responses from viewers in traditional media, such as film and video games. This effect, described as the uncanny valley, is the reason why realism is often avoided when the aim is to create an appealing virtual character. In Virtual Reality, there have been few attempts to investigate this phenomenon and the implications of rendering virtual characters with high levels of realism on user enjoyment. In this paper, we conducted a large-scale experiment on over one thousand members of the public in order to gather information on how virtual characters are perceived in interactive virtual reality games. We were particularly interested in whether different render styles (realistic, cartoon, etc.) would directly influence appeal, or if a character's personality was the most important indicator of appeal. We used a number of perceptual metrics such as subjective ratings, proximity, and attribution bias in order to test our hypothesis. Our main result shows that affinity towards virtual characters is a complex interaction between the character's appearance and personality, and that realism is in fact a positive choice for virtual characters in virtual reality.
Figure 1: Tracking data of facial movements is mapped onto the virtual face in real-time. We measure participants' feelings of ownership and agency over the virtual face, as well as perceived appeal.
With the development of increasingly sophisticated computer graphics, there is continuous growth in the variety and originality of virtual characters used in movies and games. So far, however, their design has mostly been led by the artist's preferences, not by perceptual studies. In this article, we explored how non-player character design can be used to influence gameplay. In particular, we focused on abstract virtual characters with few facial features. In experiment 1, we sought to find rules for how to use a character's facial features to elicit the perception of certain personality traits, using prior findings on human face perception as a basis. In experiment 2, we then tested how the perceived personality traits of a non-player character could influence a player's moral decisions in a video game. We found that the appearance of the character interacting with the subject modulated aggressive behavior towards a non-present individual. Our results provide a better understanding of the perception of abstract virtual characters and their employment in video games, and offer some insights into the factors underlying aggressive behavior in video games.
Whether a visual stimulus seems near or far away depends partly on its vertical elevation. Contrasting theories suggest that perception of distance could vary with elevation either because of memory of previous upward efforts in climbing to overcome gravity, or because of fear of falling associated with the downward direction. The vestibular system provides a fundamental signal for the downward direction of gravity, but the relation between this signal and depth perception remains unexplored. Here we report an experiment on vestibular contributions to depth perception, using Virtual Reality. We asked participants to judge the absolute distance of an object presented on a plane at different elevations during brief artificial vestibular inputs. Relative to distance estimates collected with the object at the level of the horizon, participants tended to overestimate distances when the object was presented above the level of the horizon and the head was tilted upward, and to underestimate them when the object was presented below the level of the horizon. Interestingly, adding artificial vestibular inputs strengthened these distance biases, showing that online multisensory signals, and not only stored information, contribute to such distance illusions. Our results support the gravity theory of depth perception, and show that vestibular signals make an online contribution to the perception of effort, and thus of distance.