In this paper, we augment existing techniques for simulating flexible objects to include models for crack initiation and propagation in three-dimensional volumes. By analyzing the stress tensors computed over a finite element model, the simulation determines where cracks should initiate and in what directions they should propagate. We demonstrate our results with animations of breaking bowls, cracking walls, and objects that fracture when they collide. By varying the shape of the objects, the material properties, and the initial conditions of the simulations, we can create strikingly different effects ranging from a wall that shatters when it is hit by a wrecking ball to a bowl that breaks in two when it is dropped on edge.
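The abstract describes deciding where cracks initiate by analyzing stress tensors over a finite element model. A common formulation of that test diagonalizes the symmetric stress tensor and compares the largest tensile principal stress against a material threshold, with the crack plane perpendicular to the corresponding eigenvector. The sketch below illustrates that idea only; the function name and the scalar `toughness` threshold are illustrative assumptions, not the paper's actual criterion.

```python
import numpy as np

def crack_check(stress, toughness):
    """Decide whether a crack should initiate at an element.

    stress: symmetric 3x3 Cauchy stress tensor at the element.
    toughness: illustrative scalar material threshold (assumption).

    Returns (should_crack, crack_plane_normal). A crack initiates
    when the largest tensile (positive) principal stress exceeds the
    threshold; the crack plane is perpendicular to the associated
    principal direction.
    """
    eigvals, eigvecs = np.linalg.eigh(stress)  # eigenvalues in ascending order
    sigma_max = eigvals[-1]                    # largest principal stress
    if sigma_max > toughness:
        return True, eigvecs[:, -1]            # normal of the crack plane
    return False, None
```

In a full fracture simulation this check would run per element per time step, and propagation would remesh the volume along the returned plane.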
This paper describes algorithms for the animation of men and women performing three dynamic athletic behaviors: running, bicycling, and vaulting. We animate these behaviors using control algorithms that cause a physically realistic model to perform the desired maneuver. For example, control algorithms allow the simulated humans to maintain balance while moving their arms, to run or bicycle at a variety of speeds, and to perform a handspring vault. Algorithms for group behaviors allow a number of simulated bicyclists to ride as a group while avoiding simple patterns of obstacles. We add secondary motion to the animations with spring-mass simulations of clothing driven by the rigid-body motion of the simulated human. For each simulation, we compare the computed motion to that of humans performing similar maneuvers both qualitatively, through the comparison of real and simulated video images, and quantitatively, through the comparison of simulated and biomechanical data.
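Control algorithms of the kind described above typically compute joint torques with proportional-derivative (PD) servos that drive each joint toward a desired angle. The snippet below is a minimal sketch of that standard building block, not the paper's specific controllers; the function name and gain values are illustrative.

```python
def pd_torque(theta_desired, theta, theta_dot, kp, kd):
    """Proportional-derivative servo for one joint.

    Produces a torque that pulls the joint angle theta toward
    theta_desired while damping the joint velocity theta_dot.
    kp and kd are hand-tuned stiffness and damping gains.
    """
    return kp * (theta_desired - theta) - kd * theta_dot
```

Higher-level state machines (e.g., flight vs. stance phases of running) would select the desired angles fed to servos like this one at each simulation step.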
In free-viewpoint video, the viewer can interactively choose a viewpoint in 3-D space to observe the action of a dynamic real-world scene from arbitrary perspectives. The human body and its motion play a central role in most visual media, and their structure can be exploited for robust motion estimation and efficient visualization. This paper describes a system that uses multi-view synchronized video footage of an actor's performance to estimate motion parameters and to interactively re-render the actor's appearance from any viewpoint. The actor's silhouettes are extracted from synchronized video frames via background segmentation and then used to determine a sequence of poses for a 3D human body model. By employing multi-view texturing during rendering, time-dependent changes in the body surface are reproduced in high detail. The motion capture subsystem runs offline, is non-intrusive, yields robust motion parameter estimates, and can cope with a broad range of motion. The rendering subsystem runs at real-time frame rates using ubiquitous graphics hardware, yielding a highly naturalistic impression of the actor. The actor can be placed in virtual environments to create composite dynamic scenes. Free-viewpoint video allows the creation of camera fly-throughs or interactive viewing of the action from arbitrary perspectives.
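The silhouette-extraction step mentioned above, background segmentation, can be sketched in its simplest per-pixel form: compare each frame pixel against a static background plate and mark pixels whose color differs by more than a threshold. This is only an illustrative baseline under the assumption of a fixed camera and static background; the function name and threshold are not from the paper.

```python
import numpy as np

def extract_silhouette(frame, background, threshold=30.0):
    """Per-pixel background subtraction.

    frame, background: HxWx3 color images from the same fixed camera.
    Returns an HxW boolean mask that is True where the frame differs
    from the background plate by more than `threshold` in color space.
    """
    diff = np.linalg.norm(frame.astype(float) - background.astype(float), axis=-1)
    return diff > threshold
```

A production system would add shadow suppression and morphological cleanup before fitting the body model to the resulting silhouettes.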
This paper presents a method for the detection and recognition of social interactions in a day-long first-person video of a social event, such as a trip to an amusement park. The location and orientation of faces are estimated and used to compute the line of sight for each face. The context provided by all the faces in a frame is used to convert the lines of sight into locations in space to which individuals attend. Further, individuals are assigned roles based on their patterns of attention. The roles and locations of individuals are analyzed over time to detect and recognize the types of social interactions. In addition to patterns of face locations and attention, the head movements of the first-person camera wearer can provide additional useful cues to their attentional focus. We demonstrate encouraging results on detection and recognition of social interactions in first-person videos captured across multiple days of experience in amusement parks.
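Converting multiple lines of sight into a shared attended location, as described above, can be posed as finding the point closest to all gaze rays in a least-squares sense. The sketch below shows one standard formulation of that geometric step in 2D (a ground-plane view); the function name and the 2D simplification are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

def attended_point(positions, directions):
    """Least-squares intersection of 2D gaze rays.

    positions: list of 2D face locations p_i on the ground plane.
    directions: list of 2D gaze directions d_i (need not be unit length).

    Minimizes the sum of squared perpendicular distances from a point x
    to each line {p_i + t * d_i}, solving sum_i (I - d_i d_i^T) x =
    sum_i (I - d_i d_i^T) p_i.
    """
    A = np.zeros((2, 2))
    b = np.zeros(2)
    for p, d in zip(positions, directions):
        d = d / np.linalg.norm(d)
        P = np.eye(2) - np.outer(d, d)  # projector onto the line's normal
        A += P
        b += P @ np.asarray(p, dtype=float)
    return np.linalg.solve(A, b)
```

Clusters of such attended points over time are what the patterns-of-attention analysis would then operate on.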