Background
Body change illusions have been of great interest in recent years for understanding how the brain represents the body. Appropriate multisensory stimulation can induce an illusion of ownership over a rubber or virtual arm, simple types of out-of-body experiences, and even ownership of an alternate whole body. Here we use immersive virtual reality to investigate whether the illusion of a dramatic increase in belly size can be induced in males through (a) a first person perspective, (b) synchronous visual-motor correlation between real and virtual arm movements, and (c) self-induced synchronous visual-tactile stimulation in the stomach area.

Methodology
Twenty-two participants entered a virtual reality (VR) delivered through a stereo, head-tracked, wide field-of-view head-mounted display. From a first person perspective they saw a virtual body with an inflated belly substituting their own. For four minutes they repeatedly prodded their real belly with a rod that had a virtual counterpart visible in the VR. In a synchronous condition their prodding movements matched what they felt and saw; in an asynchronous condition they did not. The experiment was carried out twice per participant, once in each condition, in counterbalanced order. Responses were measured by questionnaire and by comparing before and after self-estimates of belly size, produced by direct visual manipulation of the virtual body seen from the first person perspective.

Conclusions
The results show that a first person perspective of a virtual body that substitutes for one's own body in virtual reality, together with synchronous multisensory stimulation, can temporarily shift body representation toward the larger belly size.
This was demonstrated by (a) questionnaire results, (b) the difference between self-estimated belly size, judged from the first person perspective, after and before the experimental manipulation, and (c) significant positive correlations between these two measures. We discuss this result in the general context of body ownership illusions and suggest applications, including treatment for body size distortion disorders.
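The analysis pattern described above, a before/after comparison of size estimates together with a correlation against questionnaire scores, can be sketched as follows. All names and data values here are illustrative assumptions, not the paper's actual data or analysis code:

```python
from statistics import mean

def paired_diffs(before, after):
    """Per-participant change in estimated belly size (after minus before)."""
    return [a - b for b, a in zip(before, after)]

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length samples."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Invented example data: belly-size estimates (arbitrary units) and
# ownership questionnaire ratings for five hypothetical participants.
before = [1.0, 1.1, 0.9, 1.0, 1.2]
after = [1.3, 1.2, 1.1, 1.0, 1.5]
ownership = [5, 3, 4, 1, 6]

diffs = paired_diffs(before, after)
print(mean(diffs))                 # positive mean = estimates grew
print(pearson_r(diffs, ownership)) # does size change track ownership?
```

A positive mean difference together with a positive correlation between the two measures is the pattern the abstract reports.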
Advances in computer graphics algorithms and virtual reality (VR) systems, together with falling equipment costs, have led scientists to consider VR a useful tool for conducting experimental studies in fields such as neuroscience and experimental psychology. In particular, virtual body ownership, where the feeling of ownership over a virtual body is elicited in the participant, has become a useful tool for studying body representation in cognitive neuroscience and psychology. Although VR has been shown to be useful for exploring body ownership illusions, integrating the various technologies such a system requires can be daunting. In this paper we discuss the technical infrastructure necessary to achieve virtual embodiment. We describe a basic VR system and how it may be used for this purpose, and then extend it with real-time motion capture, a simple haptics system, and the integration of physiological and brain electrical activity recordings.
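The core of such an embodiment system is a per-frame loop that drives the avatar from motion capture and renders from the tracked head pose. A minimal sketch follows; the `Pose`, `Avatar`, and `frame` names are our own illustrative assumptions, not the paper's API:

```python
from dataclasses import dataclass, field

@dataclass
class Pose:
    position: tuple  # (x, y, z) in metres
    rotation: tuple  # quaternion (w, x, y, z)

@dataclass
class Avatar:
    joints: dict = field(default_factory=dict)

    def retarget(self, tracked: dict) -> None:
        """Copy each tracked joint pose onto the virtual body (1:1 mapping;
        a real system would also rescale limb lengths to the avatar)."""
        self.joints.update(tracked)

def frame(avatar: Avatar, mocap_sample: dict, head_pose: Pose) -> Pose:
    """One iteration of the embodiment loop: drive the avatar from the
    motion-capture sample, then return the camera pose. Placing the
    camera at the tracked head gives the first person perspective."""
    avatar.retarget(mocap_sample)
    return head_pose

# Usage: one frame with a single tracked joint.
avatar = Avatar()
cam = frame(avatar,
            {"r_hand": Pose((0.3, 1.0, 0.2), (1, 0, 0, 0))},
            Pose((0.0, 1.7, 0.0), (1, 0, 0, 0)))
```

Haptic events and physiological recordings would hook into the same loop, timestamped against the frame clock so that visual, tactile, and physiological streams can be aligned offline.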
This paper focuses on the development and evaluation of a haptically enhanced virtual reality system that allows a human user to shake hands with a virtual partner through a haptic interface. Multimodal feedback signals are designed to generate the illusion that a handshake with a robotic arm is a handshake with another human. Advanced controllers for the haptic interface respond to user behavior online; techniques for online behavior generation are presented, such as a hidden-Markov-model approach to estimating the human's interaction strategy. Human-robot handshake experiments were carried out to evaluate the system's performance. Two approaches to haptic rendering were compared: a basic-mode controller that plays back a trajectory embedded in the robot and disregards the human partner, and an interactive controller for online behavior generation. Both were compared against a ground truth in which another human drove the robot via teleoperation in place of the controller implementing a virtual partner. In the evaluation, the human-driven approach was rated most human-like, with the interactive controller following closely behind and the basic-mode controller last. The paper concentrates on the development of the haptic rendering algorithm for the handshaking system and its integration with visual and haptic cues, and reports the results of the subjective evaluation experiments.
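The hidden-Markov-model estimation mentioned above can be illustrated with the standard forward algorithm, which maintains a posterior over the partner's hidden strategy as observations arrive. The two strategy states ("passive", "active") and the force observations below are our own illustrative assumptions, not the paper's actual model:

```python
def forward(obs, pi, A, B):
    """Forward algorithm for a discrete HMM: returns the posterior
    distribution over hidden states given the observations so far.
    pi[i]: initial probability of state i; A[i][j]: transition
    probability i -> j; B[i][o]: probability that state i emits o."""
    n = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    for o in obs[1:]:
        alpha = [sum(alpha[i] * A[i][j] for i in range(n)) * B[j][o]
                 for j in range(n)]
    total = sum(alpha)
    return [a / total for a in alpha]

# Illustrative model: state 0 = "passive" partner, state 1 = "active";
# observation 0 = low interaction force, 1 = high force.
pi = [0.5, 0.5]
A = [[0.9, 0.1], [0.1, 0.9]]   # strategies tend to persist over time
B = [[0.8, 0.2], [0.2, 0.8]]   # active partners mostly exert high force
posterior = forward([1, 1, 1], pi, A, B)  # three high-force samples in a row
```

After a run of high-force observations the posterior concentrates on the "active" state, which is the kind of online strategy estimate an interactive controller can condition its response on.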
This paper presents our experience of using a multimodal mixed reality telecommunication system to support remote acting rehearsal. The rehearsals involved two actors, located in London and Barcelona, and a director in another location in London. This triadic audiovisual telecommunication took place in a spatial, multimodal, collaborative mixed reality environment based on the "destination-visitor" paradigm, which we define and motivate. We detail our heterogeneous system architecture, which spans the three distributed and technologically asymmetric sites and features a range of capture, display, and transmission technologies. The actors' and director's experiences of rehearsing a scene via the system are then discussed, exploring the successes and failures of this heterogeneous form of telecollaboration. Overall, the common spatial frame of reference the system presented to all parties was highly conducive to theatrical acting and directing, allowing blocking, gross gesture, and unambiguous instruction to be issued. The relative inexpressivity of the actors' embodiments was identified as the central limitation of the telecommunication: moments that relied on performing and reacting to consequential facial expression and subtle gesture were less successful.
This paper describes an experiment studying the effect of basic haptic feedback in creating a sense of social interaction within a shared virtual environment (SVE). Although a number of studies have investigated the effect of haptic feedback on collaborative task performance, they do not address its effect on inducing social presence. The purpose of this experiment is to show that haptic feedback enhances the sense of social presence within a mediated environment. The experiment used a shared desktop-based virtual environment in which 20 remotely located couples who did not know one another solved a puzzle together. Ten of the couples had shared haptic communication through their hands; the other ten did not. The haptic feedback was thus not used for completing the task itself, but as a means of social interaction, communicating with the other participant. The results suggest that basic haptic feedback increases the sense of social presence within the shared VE.
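Social-presence questionnaire ratings from the two independent groups (haptic vs. no haptic) are ordinal, so a rank-based comparison such as the Mann-Whitney U test is a natural fit. The abstract does not state which test was used, so this is only a sketch of one plausible analysis:

```python
def mann_whitney_u(group_a, group_b):
    """Mann-Whitney U statistic for group_a vs. group_b (two independent
    samples), using mid-ranks for tied values."""
    pooled = sorted((v, g) for g, vals in enumerate((group_a, group_b))
                    for v in vals)
    rank_of = []  # (group, rank) pairs
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j][0] == pooled[i][0]:
            j += 1                    # j is one past the tie run
        mid = (i + j + 1) / 2         # average of ranks i+1 .. j
        rank_of.extend((pooled[k][1], mid) for k in range(i, j))
        i = j
    r_a = sum(rank for group, rank in rank_of if group == 0)
    n_a = len(group_a)
    return r_a - n_a * (n_a + 1) / 2  # U for group_a

# Invented example ratings (1-7 scale) for the two conditions.
haptic = [6, 5, 7, 6, 5]
no_haptic = [4, 3, 5, 4, 2]
u = mann_whitney_u(haptic, no_haptic)
```

A U near the maximum (n_a * n_b) for the haptic group indicates its ratings dominate; the p-value would then come from the U distribution for the given sample sizes.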