This paper presents an interactive motion adaptation scheme for close interactions between skeletal characters and mesh structures, such as moving through restricted environments and manipulating objects. This is achieved through a new spatial relationship-based representation, which describes the kinematics of the body parts as the weighted sum of translation vectors relative to points selectively sampled over the surfaces of the mesh structures. In contrast to previous discrete representations, which either handle only static spatial relationships or require costly offline optimization, our continuous framework smoothly adapts the motion of a character to large updates of the mesh structures and character morphologies on the fly, while preserving the original context of the scene. The experimental results show that our method can be used for a wide range of applications, including motion retargeting, interactive character control, and deformation transfer for scenes that involve close interactions. Our framework is useful for artists who need to design animated scenes interactively, and for modern computer games that allow users to design their own characters, objects, and environments.
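The encode/decode idea behind such a relative representation can be sketched as follows. This is a minimal illustration under assumed conventions (one joint, fixed sample points, given weights); the paper's actual sampling and weighting schemes are more involved, and the function names are hypothetical.

```python
# Sketch: store a joint position as translation vectors relative to sampled
# surface points, then reconstruct it after the samples move.

def encode(joint, samples):
    """Translation vectors from each sampled surface point to the joint."""
    jx, jy, jz = joint
    return [(jx - sx, jy - sy, jz - sz) for sx, sy, sz in samples]

def decode(rel_vectors, new_samples, weights):
    """Reconstruct the joint as the weighted sum of (moved sample + stored vector)."""
    x = y = z = 0.0
    for (rx, ry, rz), (sx, sy, sz), w in zip(rel_vectors, new_samples, weights):
        x += w * (sx + rx)
        y += w * (sy + ry)
        z += w * (sz + rz)
    return (x, y, z)
```

When the mesh samples translate, the decoded joint follows them, which is the intuition behind adapting motion to large mesh updates.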
We propose 2D stick figures as a unified medium for visualizing and searching for human motion data. Stick figures can express a wide range of human motion, and they are easy for people to draw without any professional training. In our interface, the user can browse the overall motion by viewing the stick figure images generated from the database and retrieve motions directly by using sketched stick figures as an input query. We started with a preliminary survey to observe how people draw stick figures. Based on the rules observed from the user study, we developed an algorithm that converts motion data into a sequence of stick figures. A feature-based comparison method between stick figures provides an interactive, progressive search: it assists the user's sketching by showing the current retrieval result at each stroke. We demonstrate the utility of the system with a user study in which the participants retrieved example motion segments from a database of 102 motion files using our interface.
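A feature-based comparison of stick figures might look like the following toy sketch, assuming each figure is reduced to a vector of limb angles in radians; the paper's actual features and matching are richer, and all names here are hypothetical.

```python
import math

def angle_distance(a, b):
    """Smallest absolute difference between two angles (radians)."""
    d = abs(a - b) % (2 * math.pi)
    return min(d, 2 * math.pi - d)

def figure_distance(f1, f2):
    """Sum of per-limb angular differences between two stick figures."""
    return sum(angle_distance(a, b) for a, b in zip(f1, f2))

def retrieve(query, database):
    """Database indices sorted from best to worst match against the sketch."""
    return sorted(range(len(database)),
                  key=lambda i: figure_distance(query, database[i]))
```

Re-running `retrieve` after every stroke gives the progressive, per-stroke feedback described above.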
Figure 1: Morphable crowd models synthesize virtual crowds of any size and any length from input crowd data. The synthesized crowds can be interpolated to produce a continuous span of intervening crowd styles.

Abstract: Crowd simulation has been an important research field due to its diverse range of applications, which include film production, military simulation, and urban planning. A challenging problem is to provide simple yet effective control over captured and simulated crowds to synthesize intended group motions. We present a new method that blends existing crowd data to generate a new crowd animation. The new animation can include an arbitrary number of agents, extends for an arbitrary duration, and yields a natural-looking mixture of the input crowd data. The main benefit of this approach is to create new spatio-temporal crowd behavior in an intuitive and predictable manner. It is accomplished by introducing a morphable crowd model that allows us to encode the formations and individual trajectories in crowd data. Then, the original spatio-temporal behavior can be reconstructed and interpolated at an arbitrary scale using our morphable model.
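The interpolation between crowd styles can be illustrated with a toy blend, assuming the two input crowds have already been matched agent-to-agent and frame-to-frame (the morphable model itself handles that matching and arbitrary scale, which this sketch does not attempt):

```python
# Toy blend of two matched crowd examples: each crowd is a list of agents,
# each agent a list of (x, y) positions over time. Weight t in [0, 1]
# slides between the two input styles.

def morph_trajectories(traj_a, traj_b, t):
    return [[((1 - t) * xa + t * xb, (1 - t) * ya + t * yb)
             for (xa, ya), (xb, yb) in zip(agent_a, agent_b)]
            for agent_a, agent_b in zip(traj_a, traj_b)]
```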
The standard C/C++ implementation of a spatial partitioning data structure, such as octree and quadtree, is often inefficient in terms of storage requirements particularly when the memory overhead for maintaining parentto-child pointers is significant with respect to the amount of actual data in each tree node. In this work, we present a novel data structure that implements uniform spatial partitioning without storing explicit parent-tochild pointer links. Our linkless tree encodes the storage locations of subdivided nodes using perfect hashing while retaining important properties of uniform spatial partitioning trees, such as coarse-to-fine hierarchical representation, efficient storage usage, and efficient random accessibility. We demonstrate the performance of our linkless trees using image compression and path planning examples.
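The core idea of addressing nodes without pointers can be sketched in a few lines: give every node a key computed from its (level, x, y) address and look children up in a hash table. A Python dict stands in for the paper's perfect hash, and the key packing below is an assumed convention, not the paper's encoding.

```python
# Sketch of a pointer-free quadtree: nodes are addressed by (level, x, y)
# and found by hashing instead of following parent->child pointers.

def node_key(level, x, y):
    """Pack level and grid coordinates into one integer key."""
    return (level << 40) | (x << 20) | y

class LinklessQuadtree:
    def __init__(self):
        self.table = {}  # hash table: key -> node payload (dict as stand-in)

    def insert(self, level, x, y, payload):
        self.table[node_key(level, x, y)] = payload

    def child(self, level, x, y, quadrant):
        """Random access to a child: its address is computed, not stored."""
        dx, dy = quadrant & 1, quadrant >> 1
        return self.table.get(node_key(level + 1, 2 * x + dx, 2 * y + dy))
```

Because a child's address is derived arithmetically from its parent's, absent children simply have no table entry, which is where the storage savings come from.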
Figure 1: Our deformable motion allows animated characters to navigate through highly constrained environments in a real-time interactive control system. (left) Many-cylinder world. (middle) Jump and climb. (right) Probabilistic roadmap.

Abstract: We present an interactive method that allows animated characters to navigate through cluttered environments. Our characters are equipped with a variety of motion skills to clear obstacles, narrow passages, and highly constrained environment features. Our control method incorporates a behavior model into well-known, standard path-planning algorithms. Our behavior model, called deformable motion, consists of a graph of motion-capture fragments. The key idea of our approach is to add flexibility to motion fragments so that we can situate them in a cluttered environment via a constraint-based formulation. We demonstrate our deformable motion for real-time interactive navigation and global path planning in highly constrained virtual environments.
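What "adding flexibility to a motion fragment" can mean is illustrated by this toy stand-in: distribute an endpoint displacement along the fragment so its last frame lands on a planner-supplied target. This linear blend is only a simplified substitute for the paper's constraint-based formulation, and the setup (2D root positions) is assumed.

```python
# Toy deformation of a motion fragment: frames is a list of (x, y) root
# positions; move the final frame onto `target`, with earlier frames taking
# a linearly increasing share of the displacement.

def deform_fragment(frames, target):
    lx, ly = frames[-1]
    dx, dy = target[0] - lx, target[1] - ly
    n = len(frames) - 1
    return [(x + dx * i / n, y + dy * i / n)
            for i, (x, y) in enumerate(frames)]
```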
In this paper, we propose a novel approach for the classification and retrieval of interactions between human characters and objects. We propose to use the interaction bisector surface (IBS) between the body and the object as a feature of the interaction. We define a multi-resolution representation of the body structure and compute a correspondence matrix hierarchy that describes which parts of the character's skeleton take part in the composition of the IBS and how much they contribute to the interaction. Key frames of the interactions are extracted based on the evolution of the IBS and used to align the query interaction with the interactions in the database. Our experimental results show that our approach outperforms existing techniques in motion classification and retrieval, which implies that contextual information plays a significant role in scene and interaction description. Our method also outperforms techniques that use features based on the spatial relations between the body parts, or between the body parts and the object. Our method can be applied to character motion synthesis and robot motion planning.
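A crude approximation of "which skeleton parts contribute to the IBS" is sketched below: each object sample is owned by its nearest joint, and the midpoint of the pair serves as a rough bisector sample. The actual IBS is derived from the Voronoi diagram between body and object points; this nearest-neighbor midpoint version is only an assumed simplification for illustration.

```python
# Rough stand-in for per-joint contribution to the interaction bisector
# surface: count, for each named joint, how many object samples it is
# closest to, and collect midpoints as approximate bisector samples.

def ibs_contributions(joints, object_points):
    counts = {name: 0 for name in joints}
    samples = []
    for px, py, pz in object_points:
        name, (jx, jy, jz) = min(
            joints.items(),
            key=lambda kv: (kv[1][0] - px) ** 2
                         + (kv[1][1] - py) ** 2
                         + (kv[1][2] - pz) ** 2)
        counts[name] += 1
        samples.append(((jx + px) / 2, (jy + py) / 2, (jz + pz) / 2))
    return counts, samples
```

The contribution counts play the role of a (single-resolution) correspondence row: joints with higher counts dominate the interaction.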