Animated characters that move and gesticulate appropriately with spoken text are useful in a wide range of applications. Unfortunately, this class of movement is very difficult to generate, even more so when a unique, individual movement style is required. We present a system that, with a focus on arm gestures, is capable of producing full-body gesture animation for given input text in the style of a particular performer. Our process starts with video of a person whose gesturing style we wish to animate. A tool-assisted annotation process is performed on the video, from which a statistical model of the person's particular gesturing style is built. Using this model and input text tagged with theme, rheme and focus, our generation algorithm creates a gesture script. As opposed to isolated singleton gestures, our gesture script specifies a stream of continuous gestures coordinated with speech. This script is passed to an animation system, which enhances the gesture description with additional detail. It then generates either kinematic or physically simulated motion based on this description. The system is capable of generating gesture animations for novel text that are consistent with a given performer's style, as validated in an empirical user study.
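The pipeline described above (style model + theme/rheme/focus tags → gesture script) can be sketched in miniature. Everything below is illustrative: the tag set, gesture types, probabilities, and function names are hypothetical stand-ins, not the authors' actual model, which is learned from annotated video.

```python
import random

# Hypothetical per-performer style model: P(gesture type | information-
# structure tag). In the real system these distributions would be
# estimated from the tool-assisted video annotation.
STYLE_MODEL = {
    "theme": {"beat": 0.5, "deictic": 0.2, "metaphoric": 0.2, "none": 0.1},
    "rheme": {"beat": 0.3, "deictic": 0.3, "metaphoric": 0.3, "none": 0.1},
    "focus": {"beat": 0.2, "deictic": 0.4, "metaphoric": 0.3, "none": 0.1},
}

def generate_gesture_script(tagged_words, style_model=STYLE_MODEL, rng=None):
    """Map (word, tag) pairs to a list of gesture-script entries by
    sampling a gesture type for each word from the style model."""
    rng = rng or random.Random(0)
    script = []
    for i, (word, tag) in enumerate(tagged_words):
        types, probs = zip(*style_model[tag].items())
        gtype = rng.choices(types, weights=probs)[0]
        if gtype != "none":
            script.append({"word_index": i, "word": word, "gesture": gtype})
    return script

script = generate_gesture_script(
    [("the", "theme"), ("red", "focus"), ("house", "rheme")]
)
print(script)
```

A real gesture script would additionally carry timing (preparation, stroke, retraction) so that consecutive gestures form the continuous, speech-coordinated stream the abstract describes, rather than isolated singletons.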
The implementation of a state-specific configuration-selective vibrational configuration interaction (cs-VCI) approach based on a polynomial representation of the potential energy surface is presented. Advantages over grid-based algorithms are discussed. A combination of a configuration selection criterion, the simultaneous exclusion of irrelevant configurations, and an internal contraction scheme allows large variational spaces to be handled. A modified version of the iterative Jacobi-Davidson diagonalization has been used to determine relevant internal eigenpairs of the cs-VCI matrices in the selected space. Benchmark calculations are provided for systems with up to 2×10⁷ configurations and three-mode couplings in the expansion of the potential.
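The idea of a configuration selection criterion can be illustrated with a generic toy example in the spirit of selective CI methods: keep only configurations whose estimated second-order contribution to the target state exceeds a threshold, then diagonalize in the reduced space. The matrix, threshold, and criterion below are illustrative assumptions, not the paper's exact cs-VCI prescription.

```python
import numpy as np

# Model "VCI matrix": a dominant diagonal (harmonic-like energies)
# plus weak couplings between configurations.
rng = np.random.default_rng(1)
n = 200
H = np.diag(np.sort(rng.uniform(1.0, 100.0, n)))
off = rng.normal(0.0, 0.01, (n, n))
H += (off + off.T) / 2.0

def select_configurations(H, ref=0, tau=1e-5):
    """Keep the reference configuration plus every configuration i whose
    estimated second-order contribution |H_0i|^2 / |H_ii - H_00|
    exceeds the threshold tau."""
    d = np.diag(H)
    contrib = H[ref] ** 2 / (np.abs(d - d[ref]) + 1e-12)
    keep = np.flatnonzero(contrib > tau)
    return np.union1d([ref], keep)

sel = select_configurations(H)
E_sel = np.linalg.eigvalsh(H[np.ix_(sel, sel)])[0]   # lowest state, selected space
E_full = np.linalg.eigvalsh(H)[0]                    # lowest state, full space
print(len(sel), E_sel - E_full)
```

Because the selected space is a subspace of the full one, the selected-space eigenvalue lies variationally above the full result, and the gap shrinks as the threshold is tightened. For the matrix dimensions quoted in the abstract, the dense `eigvalsh` call used here would of course be replaced by an iterative scheme such as the modified Jacobi-Davidson method.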
Abstract. A significant goal in multi-modal virtual agent research is to determine how to vary expressive qualities of a character so that it is perceived in a desired way. The "Big Five" model of personality offers a potential framework for organizing these expressive variations. In this work, we focus on one parameter in this model, extraversion, and demonstrate how both verbal and non-verbal factors impact its perception. Relevant findings from the psychology literature are summarized. Based on these, an experiment was conducted with a virtual agent that demonstrates how language generation, gesture rate and a set of movement performance parameters can be varied to increase or decrease the perceived extraversion. Each of these factors was shown to be significant. These results offer guidance to agent designers on how best to create specific characters.
Embodied virtual reality faithfully renders users' movements onto an avatar in a virtual 3D environment, supporting nuanced nonverbal behavior alongside verbal communication. To investigate communication behavior within this medium, we had 30 dyads complete two tasks using a shared visual workspace: negotiating an apartment layout and placing model furniture on an apartment floor plan. Dyads completed both tasks under three different conditions: face-to-face, embodied VR with visible full-body avatars, and no-embodiment VR, in which participants shared a virtual space but had no visible avatars. Both subjective measures of users' experiences and detailed annotations of verbal and nonverbal behavior were used to understand how the media impact communication behavior. Embodied VR provides a high level of social presence, with conversation patterns that are very similar to face-to-face interaction. In contrast, providing only the shared environment was generally found to be lonely and appears to lead to degraded communication.
Abstract. A key goal in agent research is to be able to generate multimodal characters that can reflect a particular personality. The Big Five model of personality provides a framework for codifying personality variation. This paper reviews findings in the psychology literature to understand how the Big Five trait of emotional stability correlates with changes in verbal and nonverbal behavior. Agent behavior was modified based on these findings and a perceptual study was completed to determine if these changes lead to the controllable perception of emotional stability in virtual agents. The results reveal how language variation and the use of self-adaptors can be used to increase or decrease the perceived emotional stability of an agent. Self-adaptors are movements that often involve self-touch, such as scratching or bending one's fingers backwards in an unnatural brace. These results provide guidance on how agent designers can create particular characters, including indicating that for particular personality types, it is important to also produce typically non-communicative gestural behavior, such as the self-adaptors studied.
Vibrational angular momentum terms within the Watson Hamiltonian are often considered negligible or are approximated by the zeroth order term of an expansion of the inverse of the effective moment of inertia tensor. A multimode expansion of this tensor up to second order has been used to study the impact of first and second order terms on the vibrational transitions of N₂H₂ and HBeH₂BeH. Comparison with experimental data is provided. The expansion of the tensor can be exploited to introduce efficient prescreening techniques.
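The expansion referred to can be written, in the standard form going back to Watson (our notation; the authors' conventions may differ), as:

```latex
% Second-order expansion of the inverse effective moment of inertia
% tensor \mu = (I')^{-1} about equilibrium, with \mu^e = (I^e)^{-1}
% and a^k_{\alpha\beta} = (\partial I'_{\alpha\beta}/\partial q_k)_e:
\mu_{\alpha\beta} \approx \mu^e_{\alpha\beta}
  - \sum_k \bigl(\mu^e\, a^k\, \mu^e\bigr)_{\alpha\beta}\, q_k
  + \frac{3}{4} \sum_{k,l} \bigl(\mu^e\, a^k\, \mu^e\, a^l\, \mu^e\bigr)_{\alpha\beta}\, q_k\, q_l
```

Truncating after the first term gives the zeroth-order approximation mentioned above; the benchmark assesses the effect of additionally retaining the first- and second-order terms in the normal coordinates $q_k$.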