The emergence of off-screen interaction devices is bringing virtual reality to a broad range of applications in which virtual objects can be manipulated without traditional peripherals. However, to facilitate object interaction, additional stimuli such as haptic feedback are necessary to improve the user experience. To enable the identification of virtual 3D objects without visual feedback, a haptic display based on a vibrotactile glove with multiple points of contact gives users an enhanced sensation of touching a virtual object with their hands. Experimental results demonstrate the potential of this technology for practical applications.
This paper presents a Mixed Reality system that results from the integration of a telepresence system and an application to improve collaborative space exploration. The system combines free-viewpoint video with immersive projection technology to support non-verbal communication, including eye gaze, interpersonal distance and facial expression. Importantly, these cues can be interpreted together as people move around the simulation, maintaining natural social distance. The application is a simulation of Mars, within which the collaborators must reach agreement over, for example, where the Rover should land and go. The first contribution is the creation of a Mixed Reality system supporting contextualization of non-verbal communication. Two technological contributions are the prototyping of a technique to subtract a person from a background that may contain physical objects and/or moving images, and a lightweight texturing method for multi-view rendering that balances visual and temporal quality. A practical contribution is the demonstration of pragmatic approaches to sharing space between display systems with distinct levels of immersion. A research-tool contribution is a system that allows comparison of conventionally authored and video-based reconstructed avatars within an environment that encourages exploration and social interaction. Aspects of system quality, including the communication of facial expression and end-to-end latency, are reported.
Unmanned aerial vehicles (UAVs) offer an assistance solution for the home care of dependent persons. These aircraft can move around the home, accompany the person, and position themselves to take photographs that can be analyzed to determine the person's mood and the assistance needed. In this context, this work principally aims to design a tool to aid in the development and validation of the navigation algorithms of an autonomous vision-based UAV for monitoring dependent people. To that end, a distributed architecture has been proposed based on the real-time communication of two modules: one in charge of the UAV dynamics, trajectory planning and control algorithms, and the other devoted to visualizing the simulation in an immersive virtual environment. Thus, a system has been developed that allows the behavior of the assistant UAV to be evaluated from a technological point of view, as well as studies to be carried out from the assisted person's viewpoint. An initial validation of a quadrotor model monitoring a virtual character demonstrates the advantages of the proposed system, which is an effective, safe and adaptable tool for the development of vision-based UAVs to help dependent people at home.
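The two-module split described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the module names, the JSON message format, and the toy trajectory are all assumptions, and a thread-safe queue stands in for whatever real-time network link the distributed architecture actually uses.

```python
import json
import queue
import threading

# In-process stand-in for the real-time link between the two modules.
link = queue.Queue()

def dynamics_module(steps):
    """Dynamics/control side: plan a trajectory and publish one pose per step."""
    for t in range(steps):
        state = {"t": t, "x": 0.5 * t, "y": 0.0, "z": 2.0}  # toy straight-line path
        link.put(json.dumps(state))
    link.put(None)  # sentinel: simulation finished

def visualization_module(received):
    """Visualization side: consume poses as the immersive environment would render them."""
    while (msg := link.get()) is not None:
        received.append(json.loads(msg))

received = []
producer = threading.Thread(target=dynamics_module, args=(5,))
consumer = threading.Thread(target=visualization_module, args=(received,))
producer.start(); consumer.start()
producer.join(); consumer.join()
```

Decoupling simulation from rendering in this way lets either side be swapped out (e.g. replacing the toy dynamics with a full quadrotor model) without touching the other module.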
The estimation of human emotions plays an important role in the development of modern brain-computer interface devices such as the Emotiv EPOC+ headset. In this paper, we present an experiment to assess the classification accuracy of the emotional states provided by the headset's application programming interface (API). In this experiment, several sets of images selected from the International Affective Picture System (IAPS) dataset are shown to sixteen participants wearing the headset. First, the participants' responses to the elicited emotions, in the form of a self-assessment manikin questionnaire, are compared with the validated IAPS predefined valence, arousal and dominance values. After statistically demonstrating that the responses are highly correlated with the IAPS values, several artificial neural networks (ANNs) based on the multilayer perceptron architecture are tested to calculate the classification accuracy of the Emotiv EPOC+ API emotional outcomes. The best result is obtained for an ANN configuration with three hidden layers of 30, 8 and 3 neurons, respectively. This configuration offers 85% classification accuracy, which means that the emotional estimation provided by the headset can be used with high confidence in real-time applications based on users' emotional states. Thus, the emotional states given by the headset's API may be used without further processing of the electroencephalogram signals acquired from the scalp, which would add a level of difficulty.
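The reported architecture (three hidden layers of 30, 8 and 3 neurons) can be sketched as a plain forward pass. Only the hidden-layer sizes come from the abstract; the input width of 6 (headset metrics), the 3 output classes, ReLU activations, and the random weights are illustrative assumptions, since the abstract does not specify them.

```python
import numpy as np

def mlp_forward(x, weights, biases):
    """Forward pass through a multilayer perceptron.

    Hidden layers use ReLU; the output layer applies a numerically
    stable softmax to produce class probabilities.
    """
    a = x
    for W, b in zip(weights[:-1], biases[:-1]):
        a = np.maximum(0.0, a @ W + b)   # ReLU hidden activation
    z = a @ weights[-1] + biases[-1]
    e = np.exp(z - z.max())              # shift for numerical stability
    return e / e.sum()

# Layer sizes: 6 inputs (assumed) -> hidden 30, 8, 3 (from the abstract)
# -> 3 emotion classes (assumed).
rng = np.random.default_rng(0)
sizes = [6, 30, 8, 3, 3]
weights = [rng.standard_normal((m, n)) * 0.1 for m, n in zip(sizes, sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

probs = mlp_forward(rng.standard_normal(6), weights, biases)
```

In a trained model the weights would of course come from fitting the participants' labeled responses rather than from a random generator.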