Recent innovations in interactive digital television [1] and multimedia products have enhanced viewers' ability to interact with programs and thus to individualize their viewing experience. Designers of such applications need systems capable of immersing real-time simulated humans in games, multimedia titles, and film animations. The ability to place the viewer in a dramatic situation created by the behavior of other, simulated digital actors will add a new dimension to existing simulation-based products for education and entertainment on interactive TV. In the games market, convincing simulated humans rejuvenate existing games and enable the production of new kinds of games. Finally, in virtual reality (VR), representing participants by a virtual actor (a self-representation in the virtual world) is an important factor for a sense of presence. This becomes even more important in multiuser environments, where effective interaction among participants contributes to that sense. Even with limited sensor information, you can construct a virtual human frame in the virtual world that reflects the real body's activities. Slater and Usoh [2] showed that such a body, even if crude, heightens the sense of presence.

We have been working on simulating virtual humans for several years. Until recently, these constructs could not act in real time. Today, however, many applications need to simulate realistic-looking virtual humans in real time. We have invested considerable effort in developing and integrating several modules into a system capable of animating humans in real-time situations. This includes interactive modules for building realistic individuals and a texture-fitting method suitable for all parts of the head and body. Animating the body, including the hands and their deformations, is the key aspect of our system; to our knowledge, no competing system integrates all these functions.
We also included facial animation, as demonstrated below with virtual tennis players. Of course, real-time simulation has a price and demands compromises. Table 1 compares the methods used for both types of actors, frame-by-frame and real-time.

Real-time virtual-human simulation environments must achieve a close relationship between modeling and animation: virtual human modeling must include the structure needed for virtual human animation. We can separate the complete process broadly into three units: modeling, deformation, and motion control. We have developed a single system containing all the modules needed for simulating real-time virtual humans in distributed virtual environments (VEs). Our system lets us rapidly clone any individual and animate the clone in various contexts. People cannot mistake our virtual humans for real ones, but we consider them recognizable and realistic, as shown in the two case studies described later.

We must also distinguish our approach from others: we simulate existing people. Compare this to Perlin's scripted virtual actors [3] or to virtual characters in games...
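The three-unit split above (modeling, deformation, motion control) can be illustrated with a toy per-frame loop in which motion control drives the skeleton and deformation then derives the skin from the resulting pose. This is a hedged sketch only; the class and function names are hypothetical and are not part of the system described in the abstract:

```python
from dataclasses import dataclass, field

@dataclass
class VirtualHuman:
    """Minimal stand-in for an animatable human model (modeling unit)."""
    skeleton: dict = field(default_factory=dict)   # joint name -> angle
    mesh: list = field(default_factory=list)       # deformed skin data

def motion_control(human, t):
    # Motion-control unit: update joint angles each frame,
    # here a trivially swinging shoulder.
    human.skeleton["shoulder"] = 0.5 * t

def deform(human):
    # Deformation unit: derive the skin mesh from the current pose.
    human.mesh = [("shoulder", human.skeleton.get("shoulder", 0.0))]

def simulate(human, frames):
    # Per-frame loop: motion control drives the skeleton,
    # deformation follows, rendering would come last.
    for t in range(frames):
        motion_control(human, t)
        deform(human)
    return human

h = simulate(VirtualHuman(), frames=3)
```

The point of the ordering is the "close relationship" the abstract mentions: the deformation step can only work if the model was built with the skeletal structure that motion control expects.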
In this paper we present a virtual tennis game. We describe the creation and modeling of the virtual humans and their body deformations, and show the real-time animation and rendering aspects of the avatars. We focus on the animation of the virtual tennis ball and on the behavior of a synthetic, autonomous referee who judges the games. We describe the networked, collaborative virtual environment system, with special reference to its interfaces to driver programs. We also cover the VR devices used to merge the interactive players into the virtual tennis environment, together with the equipment and technologies employed for this experience. We conclude with remarks on personal experiences during the game and on future research topics for improving parts of the presented system.
Facial animation has been a topic of intensive research for more than three decades, yet designing realistic facial animations remains a challenging task. Several models and tools have been developed to automate the design of faces and of facial animations synchronized with speech, emotions, and gestures. In this article, we give a brief overview of existing parameterized facial animation systems. We then turn our attention to facial expression analysis, which we believe is the key to improving realism in animated faces. We report the results of our research on the analysis of facial motion capture data. We use an optical tracking system that extracts the 3D positions of markers attached at specific feature-point locations, and we capture the movements of these face markers for a talking person. We then form a vector space representation using principal component analysis (PCA) of this data, which we call the "expression and viseme space." As a result, we propose a new parameter space for sculpting facial expressions on synthetic faces. Such a representation not only offers insight into improving the realism of animated faces, but also gives a new way of generating convincing speech animation and of blending between several expressions.

Expressive facial animation finds a variety of applications ranging from virtual environments to entertainment and games. With advances in Internet technology, the development of online sales assistants, Web navigation aides, and Web-based interactive tutors is more promising than ever before. We overview recent advances in the field of facial animation on the Web, with a detailed look at the requirements for Web-based facial animation systems and various applications.
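The construction of such an expression space can be sketched as plain PCA over flattened marker coordinates: center the per-frame marker vectors, compute their covariance, and project each frame onto the leading eigenvectors. The sketch below is illustrative only, assuming tiny two-marker 2D data and using power iteration for the first principal axis; the data and function names are hypothetical, not the paper's pipeline:

```python
def mean_vector(rows):
    # Component-wise mean over all frames.
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def center(rows):
    # Subtract the mean face so PCA captures motion, not the neutral pose.
    m = mean_vector(rows)
    return [[v - mi for v, mi in zip(r, m)] for r in rows], m

def covariance(rows):
    n, d = len(rows), len(rows[0])
    return [[sum(r[i] * r[j] for r in rows) / (n - 1) for j in range(d)]
            for i in range(d)]

def power_iteration(mat, steps=200):
    # Dominant eigenvector of the covariance = first principal axis.
    v = [1.0] * len(mat)
    for _ in range(steps):
        w = [sum(mij * vj for mij, vj in zip(row, v)) for row in mat]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# Each row: flattened (x, y) positions of two face markers in one frame.
frames = [
    [0.0, 0.0, 1.0, 0.0],
    [0.1, 0.0, 1.1, 0.0],
    [0.2, 0.0, 1.2, 0.0],
    [0.3, 0.0, 1.3, 0.0],
]
centered, mean = center(frames)
pc1 = power_iteration(covariance(centered))
# Project each frame onto the first axis: its coordinate in the
# (here one-dimensional) expression space.
coords = [sum(c * p for c, p in zip(row, pc1)) for row in centered]
```

Sculpting an expression then amounts to picking a point in this low-dimensional space and mapping it back to marker displacements via the principal axes, which is what makes blending between expressions a simple interpolation of coordinates.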