When simulating large crowds, it is inevitable that the models and motions of many virtual characters will be cloned. However, the perceptual impact of this trade-off has never been studied. In this paper, we consider the ways in which an impression of variety can be created and the perceptual consequences of certain design choices. In a series of experiments designed to test people's perception of variety in crowds, we found that clones of appearance are far easier to detect than motion clones. Furthermore, we established that cloned models can be masked by color variation, random orientation, and motion. Conversely, the perception of cloned motions remains unaffected by the model on which they are displayed. Other factors that influence the ability to detect clones were examined, such as proximity, model type and characteristic motion. Our results provide novel insights and useful thresholds that will assist in creating more realistic, heterogeneous crowds.
Matrix palette skinning (also known as skeletal subspace deformation) is a very popular real-time animation technique. So far, it has only been applied to the class of quasi-articulated objects, such as moving human or animal figures. In this paper, we demonstrate how to automatically construct skinning approximations of arbitrary precomputed animations, such as those of cloth or elastic materials. In contrast to previous approaches, our method is particularly well suited to input animations without rigid components. Our transformation fitting algorithm finds optimal skinning transformations (in a least-squares sense) and therefore achieves considerably higher accuracy for non-quasi-articulated objects than previous methods. This allows the advantages of skinned animations (e.g., efficient rendering, rest-pose editing and fast collision detection) to be exploited for arbitrary deformations.
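The core of such a transformation-fitting step is solving, per bone and per frame, for the affine transform that best maps rest-pose vertex positions to their deformed positions in a least-squares sense. The paper's full algorithm (bone assignment, weighting, error metric) is not reproduced here; the sketch below only illustrates the per-transform least-squares fit, with hypothetical function and variable names.

```python
import numpy as np

def fit_affine_transform(rest, deformed):
    """Least-squares fit of a 3x4 affine transform T such that
    T @ [p; 1] ~= q for each rest-pose point p and deformed point q.

    rest, deformed: (n, 3) arrays of corresponding positions.
    Returns T as a (3, 4) matrix [R | t].
    """
    # Homogeneous rest-pose coordinates: (n, 4)
    P = np.hstack([rest, np.ones((len(rest), 1))])
    # Solve P @ X ~= deformed in the least-squares sense; X is (4, 3)
    X, *_ = np.linalg.lstsq(P, deformed, rcond=None)
    return X.T  # (3, 4)
```

For a pure translation the fit recovers the identity rotation plus the offset; in a real skinning approximation this solve would run once per bone per frame, weighted by vertex-to-bone influence.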
The simulation of large crowds of humans is important in many fields of computer graphics, including real-time applications such as games, as they can breathe life into otherwise static scenes and enhance believability. We present a novel hybrid rendering system for crowds that solves the classic problem of degraded quality of image-based representations at close distances by building an impostor rendering system on top of a full, geometry-based, human animation system. This enables almost imperceptible switching between the two representations based on a "pixel to texel" ratio, with minimal popping artefacts. Seamless interchanges are further facilitated by exploiting programmable graphics hardware to efficiently enhance the realism and variety of the dynamically-lit impostors, thereby also improving on existing impostor techniques. To test our system, our virtual crowds are embedded in an urban simulation system (as shown in Figure 1). The results demonstrate a system capable of rendering large realistic crowds with the visual realism of a high-resolution geometry rendering system, but at a fraction of the rendering cost.
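The switching criterion described above compares the screen-space size of an impostor texel to a pixel. The abstract does not give the exact formula, so the following is only a minimal sketch of one plausible form of such a test, with hypothetical names and a simple pinhole-projection approximation.

```python
def pixels_per_texel(distance, texel_world_size, focal_px):
    """Approximate on-screen size (in pixels) of one impostor texel
    for a character at the given distance, under a pinhole camera
    with focal length expressed in pixels."""
    return focal_px * texel_world_size / distance

def use_geometry(distance, texel_world_size, focal_px, threshold=1.0):
    """Switch to the full geometric representation once a single
    texel would cover more than `threshold` pixels on screen;
    otherwise the impostor remains visually adequate."""
    return pixels_per_texel(distance, texel_world_size, focal_px) > threshold
```

In practice the threshold would be tuned (and possibly hysteresis added) so that the representation does not flicker between impostor and geometry at the switching distance.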