Unlike physical reality, in which you can always see your own limbs, virtual reality simulations can be disturbing when users cannot see their own body. This seems to disrupt users' proprioception, so that they do not feel fully integrated into the environment. A third-person perspective, in which users can see their own body, should therefore be beneficial. We propose to let users switch between first- and third-person perspectives, as in video games (e.g., GTA). Since gamers prefer the third-person perspective for locomotion and the first-person view for fine manipulation, we will verify whether this behavior extends to augmented and virtual reality simulations.
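As a rough, hypothetical sketch of the proposed interaction (none of the following names or offset values come from the paper), a camera rig could expose both viewpoints and let the user, or the task type, toggle between them:

    from dataclasses import dataclass
    from enum import Enum

    class Perspective(Enum):
        FIRST_PERSON = "first"   # suited to fine manipulation
        THIRD_PERSON = "third"   # suited to locomotion

    @dataclass
    class CameraRig:
        perspective: Perspective = Perspective.FIRST_PERSON

        def toggle(self):
            # Switch viewpoint, as a single button press would in GTA-like games.
            self.perspective = (Perspective.THIRD_PERSON
                                if self.perspective is Perspective.FIRST_PERSON
                                else Perspective.FIRST_PERSON)

        def offset_from_avatar(self):
            # Illustrative offsets: eye level for first person,
            # behind and above the avatar for third person.
            if self.perspective is Perspective.FIRST_PERSON:
                return (0.0, 1.7, 0.0)
            return (0.0, 2.2, -3.5)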
Abstract. Virtual Environments (VE) are mainly visual experiences, which excludes visually impaired people. In this paper we present an application that should allow almost everybody to "see", or at least to perceive, 3D shapes. We first describe the essential aspects of such an application and the tests we performed, and finally conclude with the very promising results we obtained with geometrically simple shapes.
Abstract. We present a system that exploits advanced Mixed and Virtual Reality technologies to create a surveillance and security system that could also be extended to define emergency prevention plans in crowded environments. Surveillance cameras are carried by a mini blimp that is tele-operated through an innovative Virtual Reality interface with haptic feedback. An interactive control room (CAVE) receives multiple video streams from airborne and fixed cameras. Eye-tracking technology turns the user's gaze into the main interaction mechanism: the user in charge can examine, zoom into, and select specific views simply by looking at them. Video streams selected in the control room can be redirected to agents equipped with a PDA. On-field agents can examine the video sent by the control center and locate the actual position of the airborne cameras on a GPS-driven map. The aerial video would be augmented with real-time 3D crowds to create more realistic risk and emergency prevention plans. The prototype we present shows the added value of integrating AR/VR technologies into a complex application and opens up several research directions in the areas of tele-operation, multimodal interfaces, simulation, and risk and emergency prevention planning.
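A minimal sketch of the gaze-driven selection and redirection described above, assuming hypothetical screen regions on the CAVE walls and a generic send callback toward the agent's PDA (none of these names are from the paper):

    from dataclasses import dataclass

    @dataclass
    class Screen:
        x0: float
        y0: float
        x1: float
        y1: float
        stream_url: str
        camera_gps: tuple  # (lat, lon) of the airborne camera

        def contains(self, gaze):
            # True when the gaze point falls inside this screen's region.
            gx, gy = gaze
            return self.x0 <= gx <= self.x1 and self.y0 <= gy <= self.y1

    def select_by_gaze(gaze, screens):
        # Return the screen (hence video stream) the operator is looking at.
        for s in screens:
            if s.contains(gaze):
                return s
        return None

    def redirect_to_agent(screen, send):
        # Push the selected stream and the camera position for the GPS map.
        send({"stream": screen.stream_url, "camera_gps": screen.camera_gps})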
Gesture recognition is becoming a popular way of interacting, but it still suffers from important drawbacks that hinder its integration into everyday devices. One of these drawbacks is the activation of the recognition system, the trigger gesture, which is generally tiring and unnatural. In this paper, we propose two natural solutions to activate gesture interaction easily. The first requires a single action from the user: grasping a remote control to start interacting. The second is completely transparent to the user: the gesture system is activated only when the user's gaze points at the screen, i.e. when s/he is looking at it. Our first evaluation of the two proposed solutions, alongside a default implementation, suggests that gaze-based activation is effective enough to remove the need for a trigger gesture to activate the recognition system.
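The gaze-gated activation could look like the following hypothetical sketch, where gaze_estimator and recognizer stand in for whatever tracking and recognition components the system uses (both interfaces are assumptions, not the authors' API):

    import time

    def gaze_on_screen(gaze_estimator, bounds):
        # True when the estimated gaze point falls inside the screen bounds.
        x, y = gaze_estimator.current_point()
        (x0, y0), (x1, y1) = bounds
        return x0 <= x <= x1 and y0 <= y <= y1

    def run_loop(gaze_estimator, recognizer, bounds, period=0.05):
        # Enable the gesture recognizer only while the user looks at the
        # screen, removing the need for an explicit trigger gesture.
        while True:
            if gaze_on_screen(gaze_estimator, bounds):
                recognizer.enable()
            else:
                recognizer.disable()  # ignore hand motion when gaze is elsewhere
            time.sleep(period)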
Abstract. In this paper we present a telerehabilitation system that aims to help physiotherapists with shoulder and elbow treatment. Our system is based on a two-arm haptic force-feedback device, designed to avoid excessive effort and discomfort in the spinal column, and is remotely controlled from a smartphone. The validation of our system, based on muscular effort measurements (EMG) and supervised by a physiotherapist, provides very promising results.
Figure 1: An example of a trans-modal rendering application selected by our engine to render 3D digital content to visually impaired people.
Abstract. Nowadays, several techniques exist to render digital content, such as graphics, audio, haptics, etc. Unfortunately, they rely on different faculties that cannot always be assumed, e.g. providing a picture to a blind person would be useless. In this paper, we present a new multimodal rendering engine with a web-connected server linked to other devices to support ubiquitous computing. In order to take advantage of user capabilities, we defined an ontology populated with the following elements: user, device, and information. With the help of this ontology, our system aims to automatically select and launch a suitable rendering application. Several test-case applications were implemented to render shape, text, and video information via the audio, haptic, and sight channels. Validation demonstrates that our system is flexible, easily extensible, and promising.
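The ontology-driven selection could be approximated by matching the information type and the device's output channels against the user's available faculties; the table and names below are illustrative assumptions, not the authors' implementation:

    RENDERERS = [
        # (information type, required channel, application id)
        ("shape", "haptic", "haptic-shape-renderer"),
        ("shape", "sight",  "3d-viewer"),
        ("text",  "audio",  "text-to-speech"),
        ("text",  "sight",  "text-viewer"),
        ("video", "sight",  "video-player"),
    ]

    def select_renderer(info_type, user_faculties, device_channels):
        # Pick a rendering application usable by this user on this device,
        # mirroring the ontology's three elements: user, device, information.
        for itype, channel, app in RENDERERS:
            if (itype == info_type
                    and channel in user_faculties
                    and channel in device_channels):
                return app
        return None

    # A blind user on a device with audio and haptic output:
    print(select_renderer("shape", {"audio", "haptic"},
                          {"audio", "haptic", "sight"}))
    # -> "haptic-shape-renderer": a picture would be useless here,
    #    so the haptic channel is selected instead.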