Rendering and animating a multitude of articulated characters in real time presents a real challenge, and few hardware systems are up to the task. Until now, little research has tackled the problem of rendering numerous virtual humans in real time. This paper presents a hardware-independent technique that improves the display rate of animated characters by acting solely on the geometric and rendering information. We first review the acceleration techniques traditionally used in computer graphics and assess their suitability for articulated characters. We then show how impostors can be used to render virtual humans, and introduce concrete case studies that demonstrate the effectiveness of our approach. Finally, we tackle the visibility issue.
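For readers unfamiliar with impostors, the sketch below illustrates the general idea behind such a cache: the character is rendered once into an offscreen texture and then redrawn as a camera-facing quad until its pose or the viewpoint diverges too far. This is a minimal illustration with an assumed angle threshold and placeholder rendering calls, not the paper's actual implementation.

    import math
    from dataclasses import dataclass

    # Hypothetical tolerance: how far the view direction may rotate before
    # the cached snapshot is considered stale and the geometry is re-rendered.
    ANGLE_THRESHOLD = math.radians(15.0)

    @dataclass
    class Impostor:
        """Cached 2D snapshot of an articulated character."""
        texture: object = None                   # render-to-texture handle in a real engine
        cached_view_dir: tuple = (0.0, 0.0, 1.0)
        cached_pose_id: int = -1                 # identifies the animation keyframe shown

    def angle_between(a, b):
        # Both arguments are assumed to be unit-length view vectors.
        dot = sum(x * y for x, y in zip(a, b))
        return math.acos(max(-1.0, min(1.0, dot)))

    def render_character_to_texture(character, view_dir):
        # Placeholder: a real implementation would rasterize the full
        # articulated geometry into an offscreen buffer here.
        return ("texture", character["pose_id"], view_dir)

    def draw_impostor(imp, character, view_dir):
        """Redraw the full geometry only when the cached snapshot is stale;
        otherwise reuse the texture on a camera-facing quad."""
        stale = (imp.cached_pose_id != character["pose_id"]
                 or angle_between(imp.cached_view_dir, view_dir) > ANGLE_THRESHOLD)
        if stale:
            imp.texture = render_character_to_texture(character, view_dir)
            imp.cached_view_dir = view_dir
            imp.cached_pose_id = character["pose_id"]
        return imp.texture                       # drawn on a billboard facing the camera

    # Usage: the second call reuses the cache because the view barely moved.
    imp = Impostor()
    draw_impostor(imp, {"pose_id": 7}, (0.0, 0.0, 1.0))
    draw_impostor(imp, {"pose_id": 7}, (0.02, 0.0, 0.9998))

The saving comes from amortization: the expensive skinned-geometry pass runs only when the cache goes stale, while intermediate frames cost a single textured quad per character.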
Recent innovations in interactive digital television [1] and multimedia products have enhanced viewers' ability to interact with programs and thus to individualize their viewing experience. Designers of such applications need systems capable of immersing real-time simulated humans in games, multimedia titles, and film animations. The ability to place the viewer in a dramatic situation created by the behavior of other, simulated digital actors will add a new dimension to existing simulation-based products for education and entertainment on interactive TV. In the games market, convincing simulated humans rejuvenate existing games and enable the production of new kinds of games. Finally, in virtual reality (VR), representing participants by a virtual actor (a self-representation in the virtual world) is an important factor for a sense of presence. This becomes even more important in multiuser environments, where effective interaction among participants contributes to the sense of presence. Even with limited sensor information, you can construct a virtual human frame in the virtual world that reflects the real body's activities. Slater and Usoh [2] indicated that such a body, even if crude, heightens the sense of presence.

We have been working on simulating virtual humans for several years. Until recently, these constructs could not act in real time. Today, however, many applications need to simulate realistic-looking virtual humans in real time. We have invested considerable effort in developing and integrating several modules into a system capable of animating humans in real-time situations. This includes interactive modules for building realistic individuals and a texture-fitting method suitable for all parts of the head and body. Animating the body, including the hands and their deformations, is the key aspect of our system; to our knowledge, no competing system integrates all these functions. We have also included facial animation, as demonstrated below with virtual tennis players.

Of course, real-time simulation has a price and demands compromises. Table 1 compares the methods used for the two types of actors, frame-by-frame and real-time. Real-time virtual-human simulation environments must achieve a close relationship between modeling and animation. In other words, virtual human modeling must include the structure needed for virtual human animation. We can separate the complete process broadly into three units: modeling, deformation, and motion control (see the sketch after this passage).

We have developed a single system containing all the modules needed for simulating real-time virtual humans in distant virtual environments (VEs). Our system lets us rapidly clone any individual and animate the clone in various contexts. People cannot mistake our virtual humans for real ones, but we consider them recognizable and realistic, as shown in the two case studies described later. We must also distinguish our approach from others: we simulate existing people. Compare this to Perlin's scripted virtual actors [3] or to virtual characters in games...
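To make the three-unit decomposition mentioned above concrete, here is a minimal sketch of how modeling, deformation, and motion control might interact each frame. All class and method names are hypothetical stand-ins for illustration, not the authors' actual architecture:

    class MotionController:
        """Stub motion-control unit: produces joint angles for each frame."""
        def __init__(self):
            self.t = 0.0
        def next_pose(self, dt):
            self.t += dt
            return {"elbow": 0.5 * self.t}       # toy pose: one joint angle

    class Skeleton:
        """Stub modeling unit: the articulated structure the animation drives."""
        def __init__(self):
            self.pose = {}
        def apply(self, pose):
            self.pose = pose

    class SkinMesh:
        """Stub deformation unit: the surface that follows the skeleton."""
        def deform(self, skeleton):
            # Placeholder: a real system would displace skin vertices here.
            return dict(skeleton.pose)

    # One simulation step: motion control feeds the skeleton, the skin follows.
    skeleton, skin, controller = Skeleton(), SkinMesh(), MotionController()
    skeleton.apply(controller.next_pose(dt=1 / 30))
    skin.deform(skeleton)

The point of the decomposition is that the modeling unit must already carry the structure (joints, skin binding) that the deformation and motion-control units will exercise at runtime.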
We present new techniques that use motion planning algorithms based on probabilistic roadmaps to control 22 degrees of freedom (DOFs) of human-like characters in interactive applications. Our main purpose is the automatic synthesis of collision-free reaching motions for both arms, with automatic column control and leg flexion. Generated motions are collision-free, in equilibrium, and respect articulation range limits. To deal with the high dimension (22) of our configuration space, we bias the random distribution of configurations to favor postures most useful for reaching and grasping. In addition, we present extensions for interactively generating object manipulation sequences: a probabilistic inverse kinematics solver for proposing goal postures matching pre-designed grasps; dynamic update of roadmaps when obstacles change position; online planning of object location transfer; and an automatic stepping control to enlarge the character's reachable space. This is, to our knowledge, the first time probabilistic planning techniques have been used to automatically generate collision-free reaching motions involving the entire body of a human-like character at interactive frame rates.

Categories and Subject Descriptors (according to ACM CCS): I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism
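To make the roadmap idea concrete, the following is a minimal probabilistic-roadmap sketch in Python. The biased sampler, the validity tests, and all names are hypothetical placeholders standing in for the paper's components; configurations are plain lists of 22 joint values:

    import math
    import random

    DOF = 22                                    # dimension of the configuration space
    REST = [0.0] * DOF                          # hypothetical rest posture

    def sample_biased():
        # Bias the distribution: half the samples are uniform, half are
        # Gaussian perturbations of a rest posture useful for reaching.
        if random.random() < 0.5:
            return [random.uniform(-math.pi, math.pi) for _ in range(DOF)]
        return [r + random.gauss(0.0, 0.3) for r in REST]

    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def build_roadmap(n_samples, k_neighbors, config_valid, edge_valid):
        """Sample valid configurations, then connect each node to its
        k nearest neighbors with valid (collision-free) edges."""
        nodes = []
        while len(nodes) < n_samples:
            q = sample_biased()
            if config_valid(q):                 # collision-free, in equilibrium, within limits
                nodes.append(q)
        edges = {i: set() for i in range(len(nodes))}
        for i, q in enumerate(nodes):
            nearest = sorted((dist(q, p), j) for j, p in enumerate(nodes) if j != i)
            for _, j in nearest[:k_neighbors]:
                if edge_valid(q, nodes[j]):
                    edges[i].add(j)
                    edges[j].add(i)
        return nodes, edges

    # Usage with permissive stand-in validity tests (always accept):
    nodes, edges = build_roadmap(100, 5, lambda q: True, lambda a, b: True)

A query then connects the start and goal configurations to the roadmap and searches the graph, e.g. with Dijkstra's algorithm; dynamic obstacle updates would invalidate only the affected edges rather than rebuild the whole graph.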
† Work done while at EPFL - Virtual Reality Lab.

… Most of the techniques developed [1] have not sufficiently explored this domain. The automatic generation of collision-free grasping sequences has several direct applications in virtual reality, games, and computer animation. And yet, producing collision-free grasping motions currently involves a great deal of tedious manual work from designers. Motion planning originated in robotics, with an emphasis on the synthesis of collision-free motions for any sort of robotic structure [2]. Some works have applied mo…