Harnessing the power of modern multi-GPU architectures, we present a massively parallel simulation system based on the Material Point Method (MPM) for simulating physical behaviors of materials undergoing complex topological changes, self-collision, and large deformations. Our system makes three critical contributions. First, we introduce a new particle data structure that promotes coalesced memory access patterns on the GPU and eliminates the need for complex atomic operations on the memory hierarchy when writing particle data to the grid. Second, we propose a kernel fusion approach using a new Grid-to-Particles-to-Grid (G2P2G) scheme, which efficiently reduces GPU kernel launches, improves latency, and significantly reduces the amount of global memory needed to store particle data. Finally, we introduce optimized algorithmic designs that allow for efficient sparse grids in a shared memory context, enabling us to best utilize modern multi-GPU computational platforms for hybrid Lagrangian-Eulerian computational patterns. We demonstrate the effectiveness of our method with extensive benchmarks, evaluations, and dynamic simulations with elastoplasticity, granular media, and fluid dynamics. In comparisons against an open-source and heavily optimized CPU-based MPM codebase [Fang et al. 2019] on a colliding elastic spheres scene with particle counts ranging from 5 to 40 million, our GPU MPM achieves over 100x per-time-step speedup on a workstation with an Intel 8086K CPU and a single Quadro P6000 GPU, exposing exciting possibilities for future MPM simulations in computer graphics and computational science. Moreover, compared to the state-of-the-art GPU MPM method [Hu et al. 2019a], we not only achieve 2x acceleration on a single GPU, but our kernel fusion strategy and Array-of-Structs-of-Arrays (AoSoA) data structure design also generalize to multi-GPU systems.
Our multi-GPU MPM exhibits near-perfect weak and strong scaling with 4 GPUs, enabling performant and large-scale simulations on a 1024³ grid with close to 100 million particles at less than 4 minutes per frame on a single 4-GPU workstation, and 134 million particles at less than 1 minute per frame on an 8-GPU workstation.
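The coalescing benefit of an AoSoA layout can be illustrated with a small CPU-side sketch. The tile width, attribute count, and indexing function below are hypothetical stand-ins for illustration, not the actual layout from the paper; real GPU code would size tiles to the hardware warp width (32) so that a warp reading one attribute of consecutive particles touches consecutive addresses.

```python
# Minimal sketch of an Array-of-Structs-of-Arrays (AoSoA) index mapping.
# TILE is a hypothetical lane count; GPU implementations typically use
# the warp size (32) or a multiple of it.
TILE = 4

def aosoa_index(particle_id, attribute_id, num_attributes):
    """Map (particle, attribute) to a flat offset in AoSoA storage.

    Particles are grouped into tiles; within a tile, each attribute is
    stored contiguously across lanes, so threads processing consecutive
    particles read the same attribute from consecutive addresses
    (a coalesced access pattern).
    """
    tile, lane = divmod(particle_id, TILE)
    return tile * TILE * num_attributes + attribute_id * TILE + lane
```

With 3 attributes per particle, particles 0 and 1 store attribute 0 at offsets 0 and 1 (adjacent), whereas a plain array-of-structs would separate them by the full struct stride.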
In this paper, we propose a novel integrated method for effective modeling and realistic enhancement of scale-sensitive fluid simulation details. The core of our method is the organic combination of multi-layer depth image regression analysis and fluid-implicit-particle (FLIP) fluid simulation, in which the regression analysis induces the criterion for where fluid details should be produced. First, we capture the depth buffer of the fluid surface dynamically from the top of the scene. Second, we employ the depth peeling technique to decompose the target fluid volume into multiple depth layers and conduct time-space analysis over the surface layers. Third, we propose a logistic regression-based model to rigorously pinpoint the complex interacting regions, wherein multiple detail-relevant factors are taken into account based on the captured depth layers. Finally, details are enhanced by animating extra diffuse materials and augmenting the air-fluid mixing phenomenon. It is evident that, with depth peeling, we can afford rigorous analysis not only across surface layers at different fluid depths but also along the depth direction. After integrating the analysis results from these two sources, we are capable of performing detail enhancement both on the fluid surface and inside the fluid to obtain a convincing visual effect, even when large occlusion exists. Directly benefiting from the flexibility of image-space-dominant processing, our unified framework can be entirely implemented on graphics processing units and thus achieves interactive performance. For various fluid phenomena with different diffuse materials (e.g., spray, foam, and bubbles), comprehensive experiments and evaluations have demonstrated its superiority in high-fidelity fluid detail enhancement and its interaction with the surrounding environment.
KEYWORDS: depth peeling, FLIP, fluid detail enhancement, GPU, image space method, time-space analysis model
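The logistic regression-based criterion described above can be sketched as follows. The feature names, weights, bias, and threshold here are purely illustrative assumptions, not the trained values or actual factors from the paper; the sketch only shows the shape of the model: several per-cell detail-relevant factors are combined linearly and squashed into a seeding probability.

```python
import math

# Hypothetical detail-relevant factors and illustrative weights; the
# paper's actual factors come from multi-layer depth analysis and its
# weights from training, neither of which is reproduced here.
WEIGHTS = {"curvature": 1.5, "velocity_diff": 2.0, "layer_gap": -0.8}
BIAS = -1.0

def seed_probability(features):
    """Logistic model: linear combination of factors -> probability."""
    z = BIAS + sum(WEIGHTS[k] * features[k] for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

def should_seed(features, threshold=0.5):
    """Seed diffuse material (spray/foam/bubbles) where probability is high."""
    return seed_probability(features) >= threshold
```

A region with high surface curvature and a large velocity difference between depth layers yields a probability near 1 and gets extra diffuse particles; calm regions fall below the threshold and are left untouched.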
Particle-based simulations are ubiquitous throughout many fields of computational science and engineering, spanning the atomistic level with molecular dynamics (MD), to mesoscale particle-in-cell (PIC) simulations for solid mechanics, device-scale modeling with PIC methods for plasma physics, and massive N-body cosmology simulations of galaxy structures, with many other methods in between (Hockney & Eastwood, 1989). While these methods use particles to represent significantly different entities with completely different physical models, many low-level details are shared, including performant algorithms for short- and/or long-range particle interactions, multi-node particle communication patterns, and other data management tasks such as particle sorting and neighbor list construction.
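One such shared low-level detail, neighbor list construction via a uniform cell grid, can be sketched in a few lines. This is a generic textbook cell-list pattern, not code from any of the libraries discussed: particles are binned into cells of edge length equal to the cutoff, so each particle only tests the 27 surrounding cells instead of all other particles.

```python
from collections import defaultdict
from itertools import product

def build_neighbor_lists(positions, cutoff):
    """Cell-list neighbor search in 3D.

    Bins particles into cubic cells of edge `cutoff`, then finds, for
    each particle, all others within `cutoff` by scanning only the
    3x3x3 block of cells around it -- the same pattern used by MD,
    PIC, and SPH codes alike.
    """
    cells = defaultdict(list)
    for i, p in enumerate(positions):
        cells[tuple(int(c // cutoff) for c in p)].append(i)
    neighbors = {i: [] for i in range(len(positions))}
    for i, p in enumerate(positions):
        cell = tuple(int(c // cutoff) for c in p)
        for off in product((-1, 0, 1), repeat=3):
            key = tuple(c + o for c, o in zip(cell, off))
            for j in cells.get(key, ()):
                dist2 = sum((a - b) ** 2 for a, b in zip(p, positions[j]))
                if j != i and dist2 <= cutoff ** 2:
                    neighbors[i].append(j)
    return neighbors
```

The cost drops from O(N²) pairwise tests to roughly O(N) for uniformly distributed particles, which is why this structure (or a sorted variant of it) appears in nearly every production particle code.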
We propose VRGym, a virtual reality (VR) testbed for realistic human-robot interaction. Different from existing toolkits and VR environments, VRGym emphasizes building and training both physical and interactive agents for robotics, machine learning, and cognitive science. VRGym leverages mechanisms that can generate diverse 3D scenes with high realism through physics-based simulation. We demonstrate that VRGym is able to (i) collect human interactions and fine manipulations, (ii) accommodate various robots with a ROS bridge, (iii) support experiments for human-robot interaction, and (iv) provide toolkits for training state-of-the-art machine learning algorithms. We hope VRGym can help to advance general-purpose robotics and machine learning agents, as well as assist human studies in the field of cognitive science.