Research on virtual characters has been ongoing for the past 20 years. Early efforts focused mostly on making characters move and speak, that is, on body and facial animation. Simultaneously, researchers worked on making characters look convincing by animating and rendering hair, clothes, and muscles. The next step was to increase artists' interactive control over characters, making it easier to create convincing video games and films. Today, research into user interactivity has come to the forefront. It's no longer sufficient for characters to simply look like imitations of humans; they must behave like humans, too. This drives research into emotional and conversational virtual characters, or embodied conversational agents. The goal is a virtual character with a human-like personality that can respond emotionally while conversing with a user. To this end, some researchers mathematically model emotions, behavior, mood, and personality for virtual characters. As we describe here, researchers can use these models to create an emotionally responsive character. However, such models lack the critical component of memory: a memory not just of events but also of past emotional interaction.

We've developed a memory-based emotion model that uses the memory of past interactions to build long-term relationships between the virtual character and users. We combine this model with state-of-the-art animation blending to generate smooth animation for the character during the interaction. To make the interaction more natural, we also use face recognition techniques; the character can thus "remember" a user's face and automatically adjust the current interaction on the basis of its existing relationship with the user. Finally, to increase the user's immersion, we place a life-sized character in a real environment using marker-based augmented reality (AR) techniques.
Our example application is Eva, a geography teacher who has multiple interactions with two student users.

Modeling Realistic Characters

To create realistic characters, we must build models around three general aspects: emotion, mood and personality, and relationship.

Modeling Emotions

Emotions have proven effects on cognitive processes such as action selection, learning, memory, motivation, and planning. Our emotions both motivate our decisions and influence our actions. As such, they're a key mechanism for controlling virtual-character behavior, both by creating characters' personality and by automatically producing animations through simulation of characters' internal dynamics.

Jonathan Gratch and Stacy Marsella define two methods for modeling emotion in lifelike characters: communication-driven methods and simulation-based methods.1 Communication-driven methods treat emotional displays as a means of communication. These systems don't internally calculate emotion; instead, they select emotional displays on the basis of what the character intends to communicate.
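To make the idea of a memory-based emotion model concrete, here is a minimal sketch of one plausible mechanism: a per-user relationship value maintained as an exponential moving average of past interaction valences, blended into the character's immediate emotional response. All names (`RelationshipMemory`, `emotional_response`) and constants are illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass, field

@dataclass
class RelationshipMemory:
    """Hypothetical per-user memory: a running impression of past interactions."""
    alpha: float = 0.3                               # weight of the newest experience
    impressions: dict = field(default_factory=dict)  # user_id -> valence in [-1, 1]

    def recall(self, user_id):
        # A newly recognized face starts from a neutral relationship.
        return self.impressions.get(user_id, 0.0)

    def update(self, user_id, interaction_valence):
        # Exponential moving average: old impressions decay, recent ones dominate.
        old = self.recall(user_id)
        new = (1 - self.alpha) * old + self.alpha * interaction_valence
        self.impressions[user_id] = max(-1.0, min(1.0, new))
        return self.impressions[user_id]

def emotional_response(stimulus_valence, remembered_valence, memory_weight=0.5):
    """Blend the immediate stimulus with the remembered relationship."""
    return (1 - memory_weight) * stimulus_valence + memory_weight * remembered_valence

memory = RelationshipMemory()
memory.update("student_A", 0.8)   # a pleasant first lesson
r = emotional_response(0.2, memory.recall("student_A"))
```

With this kind of state, the same mildly positive stimulus produces a warmer response toward a user the character remembers fondly than toward a stranger, which is the behavior the face-recognition step makes possible.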
This paper presents a simple, three-stage method to simulate the mechanics of wetting of porous solid objects, such as sponges and cloth, when they interact with a fluid. In the first stage, we model the absorption of fluid by the object when it comes in contact with the fluid. In the second stage, we model the transport of absorbed fluid inside the object, due to diffusion, as a flow in a deforming, unstructured mesh. The fluid diffuses within the object depending on the saturation of its various parts and on other body forces. Finally, in the third stage, oversaturated parts of the object shed extra fluid by dripping. The simulation model is motivated by the physics of imbibition of fluids into porous solids in the presence of gravity. It is phenomenologically capable of simulating wicking and imbibition, dripping, surface flows over wet media, material weakening, and volume expansion due to wetting. The model is inherently mass conserving and works both for thin 2D objects like cloth and for 3D volumetric objects like sponges. It is also designed to be computationally efficient and can easily be added to existing cloth, soft body, and fluid simulation pipelines.
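The three stages above can be sketched on a toy 1D chain of cells: absorption at contact cells, saturation-driven diffusion between neighbors, and dripping of any excess. This is an illustrative, mass-conserving toy, not the paper's mesh-based formulation; function names and rate constants are assumptions.

```python
def absorb(sat, capacity, contact, rate=0.5):
    """Stage 1: cells in contact with the fluid take up fluid, up to capacity."""
    gained = 0.0
    for i in contact:
        take = min(rate, capacity[i] - sat[i])
        sat[i] += take
        gained += take
    return gained

def diffuse(sat, dt=0.1, k=1.0):
    """Stage 2: fluid flows between neighboring cells toward equal saturation."""
    flux = [k * (sat[i] - sat[i + 1]) for i in range(len(sat) - 1)]
    for i, f in enumerate(flux):
        sat[i] -= dt * f
        sat[i + 1] += dt * f   # whatever leaves one cell enters its neighbor

def drip(sat, capacity):
    """Stage 3: oversaturated cells shed the excess as free fluid."""
    shed = 0.0
    for i in range(len(sat)):
        if sat[i] > capacity[i]:
            shed += sat[i] - capacity[i]
            sat[i] = capacity[i]
    return shed

sat = [0.0, 0.0, 0.0]          # saturation per cell
cap = [1.0, 1.0, 1.0]          # capacity per cell
total_in = sum(absorb(sat, cap, contact=[0]) for _ in range(4))
for _ in range(10):
    diffuse(sat)
total_out = drip(sat, cap)
# Mass conservation: fluid absorbed == fluid still held + fluid shed
```

Because diffusion only moves fluid between paired cells and dripping only removes the measured excess, total mass is conserved by construction, mirroring the property the paper claims for its model.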
Our system takes as input a sketch (a) and a base mesh model (b), then recovers a camera to orient the base mesh (c), then reconstructs the skeleton pose (d), and finally deforms the mesh to find the best possible match with the sketch (e).

Abstract

In this paper, we present a novel system for facilitating the creation of stylized view-dependent 3D animation. Our system harnesses the skill and intuition of a traditionally trained animator by providing a convivial sketch-based 2D-to-3D interface. A base mesh model of the character can be modified to closely match an input sketch, with minimal user interaction. To do this, we recover the best camera from the intended view direction in the sketch using robust computer vision techniques. This aligns the mesh model with the sketch. We then deform the 3D character in two stages: first we reconstruct the best matching skeletal pose from the sketch, and then we deform the mesh geometry. We introduce techniques to incorporate deformations in the view-dependent setting. This allows us to set up view-dependent models for animation.
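The abstract does not specify which camera-recovery technique is used, so as one standard option, here is a sketch of the Direct Linear Transform (DLT): given 2D-3D correspondences between sketch landmarks and base-mesh points, it estimates a 3x4 projection matrix via SVD. The authors' actual method may differ; this is only a plausible instance of "recovering the best camera" from correspondences.

```python
import numpy as np

def dlt_camera(points3d, points2d):
    """Estimate a 3x4 projection matrix P with the Direct Linear Transform.

    Each 2D-3D correspondence (x <-> X) contributes two rows to the linear
    system A p = 0; the solution is the right singular vector of A with the
    smallest singular value. Needs at least 6 correspondences.
    """
    A = []
    for X, x in zip(points3d, points2d):
        Xh = np.append(X, 1.0)               # homogeneous 3D point
        u, v = x
        A.append(np.concatenate([Xh, np.zeros(4), -u * Xh]))
        A.append(np.concatenate([np.zeros(4), Xh, -v * Xh]))
    _, _, Vt = np.linalg.svd(np.asarray(A))
    return Vt[-1].reshape(3, 4)              # defined up to scale

def project(P, X):
    xh = P @ np.append(X, 1.0)
    return xh[:2] / xh[2]

# Synthetic check: project known 3D points with a known camera, then recover it.
rng = np.random.default_rng(0)
P_true = np.hstack([np.eye(3), [[0.1], [0.2], [5.0]]])
pts3d = rng.uniform(-1, 1, size=(8, 3))
pts2d = [project(P_true, X) for X in pts3d]
P_est = dlt_camera(pts3d, pts2d)
errors = [np.linalg.norm(project(P_est, X) - x) for X, x in zip(pts3d, pts2d)]
```

Once such a camera is recovered, the base mesh can be rendered from the sketch's view direction, which is the alignment step the pipeline's pose reconstruction and mesh deformation build on.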