Abstract. This paper describes an experiment conducted to evaluate three interaction techniques for exploring large virtual environments with haptic devices that have a limited workspace: the Scaling technique, the Clutching technique, and the Bubble technique. Participants were asked to paint a virtual model as quickly and as precisely as possible inside a CAVE, using a desktop haptic device. The results showed that the Bubble technique yielded both the fastest and the most precise paintings, and it was also the technique participants preferred.
In this paper, we present preliminary results on the use of deep learning techniques to integrate the user's own body and other participants into a head-mounted video see-through augmented virtuality scenario. It has previously been shown that seeing users' bodies in such simulations may improve the feeling of both self-presence and social presence in the virtual environment, as well as user performance. We propose to use a convolutional neural network for real-time semantic segmentation of users' bodies in the stereoscopic RGB video streams acquired from the user's perspective. We describe design issues as well as implementation details of the system, and demonstrate the feasibility of using such neural networks for merging users' bodies into an augmented virtuality simulation.