Degradation of the visual system can lead to a dramatic reduction in mobility by limiting a person to the senses of touch and hearing. This paper presents the development of an obstacle detection system for visually impaired people. While moving through the environment, the user is alerted to obstacles within range. The proposed system detects obstacles surrounding the user with a multi-sonar system and sends appropriate vibrotactile feedback. The system aims to increase the mobility of visually impaired people by offering new sensing abilities.
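The abstract does not specify how sonar readings are turned into vibrotactile feedback; the following is a minimal sketch of one plausible distance-to-intensity mapping. The function name, the linear ramp, and the range threshold are illustrative assumptions, not details from the paper.

```python
def vibration_intensity(distance_m, max_range_m=3.0):
    """Map a sonar distance reading to a vibration duty cycle in [0, 1].

    Closer obstacles produce stronger vibration; readings at or beyond
    max_range_m produce none. The linear ramp is illustrative only.
    """
    if distance_m >= max_range_m:
        return 0.0
    # Intensity 1.0 at contact, falling linearly to 0.0 at max range.
    return 1.0 - distance_m / max_range_m
```

In a real multi-sonar system, one such mapping would run per sensor, each driving the actuator oriented toward that sensor's field of view.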
Unlike reality, in which you can see your own limbs, virtual reality simulations can be disturbing because you cannot see your own body. This seems to create a proprioception issue: the user does not feel fully integrated into the environment. Restoring such a perspective should be beneficial for users. We propose giving people the possibility to use both first- and third-person perspectives, as in video games (e.g., GTA). Since gamers tend to prefer the third-person perspective for movement actions and the first-person view for fine operations, we will verify whether this behavior extends to simulations in augmented and virtual reality.
Expressive facial animation synthesis of human-like characters has seen many approaches with good results. The MPEG-4 standard has served as the basis for many of them. In this paper we lay out the knowledge of some of those approaches in an ontology in order to support the modeling of emotional facial animation in virtual humans (VH). Within this ontology we present MPEG-4 facial animation concepts and their relationship with emotion through expression profiles that utilize psychological models of emotion. The ontology allows storing, indexing, and retrieving prerecorded synthetic facial animations that express a given emotion. This ontology can also be used as a refined knowledge base for the creation of emotional facial animation. The ontology is built using the Web Ontology Language (OWL), and the results are presented as answered queries.
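The core retrieval idea (index prerecorded animations by the emotion they express, then answer queries against that index) can be sketched in plain Python. In the paper this knowledge lives in an OWL ontology queried formally; the dictionary, clip names, and emotion labels below are stand-in assumptions.

```python
# Toy index of prerecorded facial animations keyed by expressed emotion.
# An OWL ontology plays this role in the actual system; a dict stands in here.
animation_index = {
    "joy": ["smile_clip_01", "laugh_clip_02"],
    "sadness": ["frown_clip_01"],
}

def retrieve_animations(emotion):
    """Answer a query: which prerecorded animations express this emotion?"""
    return animation_index.get(emotion, [])
```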
We present a multimodal user interface for interaction with a virtual environment back-projected on a large projection screen. We use the interaction metaphor of a "spell-casting" wizard (the user) wielding a "magic wand" to interact with the VR environment and complete tasks. Our contribution is a user interface that tries to take advantage of the user's past experience with fairy tales and fantasy movies.
We present a system for real-time configuration of multimodal interfaces to Virtual Environments (VE). The flexibility of our tool is supported by a semantics-based representation of VEs. Semantic descriptors are used to define interaction devices and the virtual entities under control. We use portable (XML) descriptors to define the I/O channels of a variety of interaction devices. The semantic description of virtual objects turns them into reactive entities with which the user can communicate in multiple ways. This article gives details on the semantics-based representation and presents some examples of multimodal interfaces created with our system, including gesture-based and PDA-based interfaces, among others.
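The abstract mentions portable XML descriptors for device I/O channels without showing their schema; below is a hypothetical descriptor and a minimal parser sketch using Python's standard library. The element and attribute names (`device`, `channel`, `direction`) are assumptions, not the paper's actual format.

```python
import xml.etree.ElementTree as ET

# Hypothetical device descriptor; the real schema is not given in the abstract.
DESCRIPTOR = """
<device name="data_glove">
  <channel id="flex_index" direction="in" type="float"/>
  <channel id="vibrate_palm" direction="out" type="bool"/>
</device>
"""

def load_channels(xml_text):
    """Parse a device descriptor into (id, direction, type) tuples."""
    root = ET.fromstring(xml_text)
    return [(c.get("id"), c.get("direction"), c.get("type"))
            for c in root.findall("channel")]
```

A configuration tool of this kind would load one such descriptor per device at runtime and bind the declared channels to virtual-entity actions.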
Perceiving an aircraft's attitude becomes difficult under spatial disorientation for a single pilot performing the daily tasks inherent to a long-duration flight. Sleepiness, movements, and other activities required by vital functions reduce attention and awareness of the current aircraft situation. This paper presents the development of a system that aims to decrease the attention needed to maintain an aircraft's attitude and to take corrective action when the autopilot goes out of bounds. An embedded system has been integrated into the pilot's clothing. It sends vibrotactile feedback to the pilot when the aircraft becomes off balance. The system also dynamically localizes the position of the actuators in order to ensure feedback that is constant in space, independent of the pilot's posture and movements. A series of tests has been conducted to validate the interest of this localization, showing a slight improvement in the response time needed to take corrective action. By enhancing the pilot's own feeling of the plane's orientation, the system provides a complementary tool to improve exhausting long-flight conditions.
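Keeping the feedback "constant in space" despite pilot movement amounts to re-expressing a space-fixed cue direction in the pilot's body frame before choosing an actuator. A minimal sketch of that idea follows; the angle convention, torso-yaw compensation, and actuator layout are assumptions for illustration, not the paper's method.

```python
def select_actuator(cue_angle_deg, torso_yaw_deg, actuator_angles_deg):
    """Pick the actuator whose body position best matches a space-fixed cue.

    cue_angle_deg: direction of the attitude cue in a space-fixed frame.
    torso_yaw_deg: current rotation of the pilot's torso (posture tracking).
    actuator_angles_deg: angular positions of actuators around the torso.
    Returns the index of the closest actuator. All conventions illustrative.
    """
    # Express the cue in the torso frame so the felt direction stays fixed
    # in space even as the pilot turns.
    local = (cue_angle_deg - torso_yaw_deg) % 360.0

    def angular_dist(a):
        d = abs(a - local) % 360.0
        return min(d, 360.0 - d)

    return min(range(len(actuator_angles_deg)),
               key=lambda i: angular_dist(actuator_angles_deg[i]))
```

With four actuators at 0°, 90°, 180°, and 270°, a cue at 90° fires actuator 1 for an untwisted torso, but actuator 0 once the pilot has turned 90°, so the vibration is felt from the same spatial direction.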