Expressive facial animation synthesis for human-like characters has been approached in many ways, with good results, and the MPEG-4 standard has served as the basis for many of these approaches. In this paper we organize the knowledge behind some of these approaches into an ontology in order to support the modeling of emotional facial animation in virtual humans (VHs). Within this ontology we present MPEG-4 facial animation concepts and their relationship to emotion through expression profiles that draw on psychological models of emotion. The ontology allows storing, indexing, and retrieving prerecorded synthetic facial animations that express a given emotion, and it can also serve as a refined knowledge base for the creation of emotional facial animation. The ontology is built with the Web Ontology Language (OWL), and the results are presented as answered queries.
In recent years, 3D media have become more and more widespread and have been made available in numerous online repositories. A systematic and formal approach for representing and organizing shape-related information is needed to share 3D media, to communicate the knowledge associated with shape modeling processes, and to facilitate its reuse in useful cross-domain usage scenarios. In this paper we present an initial attempt to formalize an ontology for digital shapes, called the Common Shape Ontology (CSO). We discuss the rationale, the requirements, and the scope of this ontology; we present its structure in detail and describe the most relevant choices related to its development. Finally, we show how the CSO conceptualization is used in domain-specific application scenarios.
Most of the efforts concerning graphical representations of humans (Virtual Humans) have been focused on synthesizing geometry for static or animated shapes. The next step is to consider a human body not only as a 3D shape, but as an active semantic entity with features, functionalities, interaction skills, etc. We are currently working on an ontology-based approach to make Virtual Humans more active and understandable both for humans and machines. The ontology for Virtual Humans we are defining will provide the “semantic layer” required to reconstruct, stock, retrieve, reuse and share content and knowledge related to Virtual Humans.
No abstract
Most of the efforts concerning graphical representations of humans (Virtual Humans) have been focused on synthesizing geometry for static or animated shapes. The next step is to consider a human body not only as a 3D shape, but as an active semantic entity with features, functionalities, interaction skills, etc. In the framework of the AIM@SHAPE Network of Excellence we are currently working on an ontology-based approach to make Virtual Humans more active and understandable both for humans and machines. The ontology for Virtual Humans we are defining will provide the "semantic layer" required to reconstruct, stock, retrieve and reuse content and knowledge related to Virtual Humans. The connection between the semantic and the graphical data is achieved thanks to an intermediate layer based on anatomical features extracted from morphological shape analysis. The resulting shape descriptors can be used to derive higher-level descriptors from the raw geometric data. High-level descriptors can then be used to control human models.