Expressive facial animation synthesis for human-like characters has been approached in many ways, with good results, and the MPEG-4 standard has served as the basis for many of these approaches. In this paper we organize the knowledge behind some of these approaches in an ontology, in order to support the modeling of emotional facial animation in virtual humans (VHs). Within this ontology we present MPEG-4 facial animation concepts and their relationship with emotion through expression profiles that draw on psychological models of emotion. The ontology allows storing, indexing and retrieving prerecorded synthetic facial animations that express a given emotion, and it can also be used as a refined knowledge base for the creation of emotional facial animation. The ontology is built with the Web Ontology Language (OWL), and the results are presented as answered queries.
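The storage-and-retrieval behavior this abstract describes can be illustrated with a minimal sketch. All class names, emotion labels and FAP values below are assumptions for illustration, not the ontology actually proposed in the paper:

```python
# Toy sketch of indexing prerecorded MPEG-4 facial animations by emotion
# via expression profiles. Names and values are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class ExpressionProfile:
    """Links an emotion label to MPEG-4 Facial Animation Parameter values."""
    emotion: str                                     # e.g. "joy", per a psychological model
    fap_values: dict = field(default_factory=dict)   # FAP id -> displacement

@dataclass
class FacialAnimation:
    name: str
    profile: ExpressionProfile

class AnimationKnowledgeBase:
    """Stand-in for the OWL knowledge base: store and query clips by emotion."""
    def __init__(self):
        self._index = {}   # emotion label -> list of animations

    def store(self, anim):
        self._index.setdefault(anim.profile.emotion, []).append(anim)

    def retrieve(self, emotion):
        return self._index.get(emotion, [])

kb = AnimationKnowledgeBase()
kb.store(FacialAnimation("smile_01", ExpressionProfile("joy", {3: 120, 5: 80})))
print([a.name for a in kb.retrieve("joy")])   # ['smile_01']
```

In the actual system the index would be expressed as OWL classes and properties and queried through a reasoner rather than a Python dictionary.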
In recent years, 3D media have become more and more widespread and have been made available in numerous online repositories. A systematic and formal approach for representing and organizing shape-related information is needed to share 3D media, to communicate the knowledge associated with shape modeling processes and to facilitate its reuse in useful cross-domain usage scenarios. In this paper we present an initial attempt to formalize an ontology for digital shapes, called the Common Shape Ontology (CSO). We discuss the rationale, the requirements and the scope of this ontology, present its structure in detail, and describe the most relevant choices made during its development. Finally, we show how the CSO conceptualization is used in domain-specific application scenarios.
Most of the efforts concerning graphical representations of humans (Virtual Humans) have been focused on synthesizing geometry for static or animated shapes. The next step is to consider a human body not only as a 3D shape, but as an active semantic entity with features, functionalities, interaction skills, etc. We are currently working on an ontology-based approach to make Virtual Humans more active and understandable both for humans and machines. The ontology for Virtual Humans we are defining will provide the “semantic layer” required to reconstruct, stock, retrieve, reuse and share content and knowledge related to Virtual Humans.
No abstract
Most of the efforts concerning graphical representations of humans (Virtual Humans) have been focused on synthesizing geometry for static or animated shapes. The next step is to consider a human body not only as a 3D shape, but as an active semantic entity with features, functionalities, interaction skills, etc. In the framework of the AIM@SHAPE Network of Excellence we are currently working on an ontology-based approach to make Virtual Humans more active and understandable both for humans and machines. The ontology for Virtual Humans we are defining will provide the "semantic layer" required to reconstruct, stock, retrieve and reuse content and knowledge related to Virtual Humans. The connection between the semantic and the graphical data is achieved thanks to an intermediate layer based on anatomical features extracted from morphological shape analysis. The resulting shape descriptors can be used to derive higher-level descriptors from the raw geometric data. High-level descriptors can then be used to control human models.
The creation of virtual reality applications and 3D environments is a complex task that requires good programming skills and expertise in computer graphics and many other disciplines. The complexity increases when we want to include complex entities such as virtual characters and animate them. In this paper we present a system that assists in the tasks of setting up a 3D scene and configuring several parameters affecting the behavior of virtual entities such as objects and autonomous virtual humans. Our application is based on a visual programming paradigm, supported by a semantic representation: an ontology for virtual environments. The ontology allows us to store and organize the components of a 3D scene, together with the knowledge associated with them. It is also used to expose functionalities of the given 3D engine. Based on a formal representation of its components, the proposed architecture provides a scalable VR system. Using this system, non-experts can set up interactive scenarios with minimum effort; no programming skills or advanced knowledge are required. Keywords: Inhabited virtual environments • Visual programming • Authoring tool • Ontologies
The use of inhabited Virtual Environments is continuously growing. People can embody a human-like avatar to participate in these Virtual Environments, or they can have a personalized character acting as a mediator; sometimes they can even customize it to some extent. These Virtual Characters belong to the software owner, but they could potentially be shared, exchanged and individualized among participants, as already proposed by Sony with Station Exchange. Technology based on standards could significantly improve the exchange, reuse and creation of such Virtual Characters. However, optimal reuse is only possible if the main components of the characters (geometry, morphology, animation and behavior) are annotated with semantics. This would allow users to search for specific models and customize them. Moreover, search technology based on the Web Ontology Language (OWL) can be implemented to provide this type of service. In this paper we present the considerations involved in building an ontology that fulfills the mentioned purposes.
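The kind of component-level semantic search this abstract motivates can be sketched in a few lines. The annotation keys and character entries below are illustrative assumptions, not the paper's actual ontology vocabulary:

```python
# Hedged sketch: searching Virtual Characters by semantic annotations on
# their main components (geometry, morphology, animation, behavior).
# All property names and values are illustrative assumptions.
characters = [
    {"name": "avatar_a",
     "geometry": "low-poly", "morphology": "adult-male",
     "animations": {"walk", "wave"}, "behavior": "scripted"},
    {"name": "avatar_b",
     "geometry": "high-poly", "morphology": "adult-female",
     "animations": {"walk", "dance"}, "behavior": "autonomous"},
]

def search(**criteria):
    """Return names of characters whose annotations match every criterion."""
    def matches(c):
        for key, wanted in criteria.items():
            value = c.get(key)
            if isinstance(value, set):
                if wanted not in value:
                    return False
            elif value != wanted:
                return False
        return True
    return [c["name"] for c in characters if matches(c)]

print(search(animations="walk", behavior="autonomous"))   # ['avatar_b']
```

An OWL-based service would express the same annotations as class assertions and answer such searches with a reasoner or SPARQL-style queries instead of an in-memory filter.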
We aim to create a model of emotional, reactive virtual humans. A large set of pre-recorded animations is used to build such a model. We have defined a knowledge-based system that stores animations of reflex movements, taking into account personality and emotional state. Populating such a database is a complex task. In this paper we describe a multimodal authoring tool that provides a solution to this problem. Our multimodal tool makes use of motion capture equipment, a handheld device and a large projection screen.
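The lookup the abstract describes (a reflex animation chosen from personality and emotional state) can be illustrated with a small sketch; the state labels and clip names are assumptions for illustration only:

```python
# Illustrative sketch (names assumed) of a knowledge base of pre-recorded
# reflex animations indexed by (emotional state, personality), so a reactive
# virtual human can select a matching clip at runtime.
reflex_db = {
    ("startled", "timid"):     "flinch_and_step_back",
    ("startled", "assertive"): "turn_toward_source",
    ("happy",    "timid"):     "small_smile",
}

def select_reflex(emotion, personality, default="idle"):
    """Pick the stored reflex clip for this state, or a fallback clip."""
    return reflex_db.get((emotion, personality), default)

print(select_reflex("startled", "timid"))   # flinch_and_step_back
```

Populating a table like this by hand is exactly the bottleneck the paper's multimodal authoring tool (motion capture, handheld device, projection screen) is meant to address.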