We propose a new fluid control technique that uses scale-dependent force control to preserve small-scale fluid detail. Control particles define local force fields and can be generated automatically from either a physical simulation or a sequence of target shapes. We use a multi-scale decomposition of the velocity field and apply control forces only to the coarse-scale components of the flow. Small-scale detail is thus preserved in a natural way, avoiding the artificial viscosity often introduced by force-based control methods. We demonstrate the effectiveness of our method for both Lagrangian and Eulerian fluid simulation environments.
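To make the idea concrete, the following minimal 1-D sketch (hypothetical code, not the authors' implementation) splits a velocity field into coarse and fine parts with a box filter and lets a control force act on the coarse part only; the filter width, gain, and target field stand in for the multi-scale decomposition and particle-defined force fields described above.

// A minimal 1-D sketch (hypothetical, not the paper's implementation):
// decompose the velocity field into coarse + fine parts with a box filter
// and let a control force act on the coarse part only, so small-scale
// detail (the fine part) is carried along unchanged.
#include <vector>
#include <cstdio>

// Box-filter low-pass of width 2r+1 (clamped at the boundaries).
std::vector<double> lowPass(const std::vector<double>& u, int r) {
    std::vector<double> c(u.size());
    for (int i = 0; i < (int)u.size(); ++i) {
        double sum = 0.0; int n = 0;
        for (int j = i - r; j <= i + r; ++j)
            if (j >= 0 && j < (int)u.size()) { sum += u[j]; ++n; }
        c[i] = sum / n;
    }
    return c;
}

int main() {
    const int N = 64; const double dt = 0.1, gain = 2.0;   // illustrative parameters
    std::vector<double> u(N), target(N, 1.0);               // target velocity set by control particles (assumed)
    for (int i = 0; i < N; ++i) u[i] = 0.2 * ((i % 2) ? 1 : -1);  // fine-scale detail

    std::vector<double> coarse = lowPass(u, 4);
    for (int i = 0; i < N; ++i) {
        double fine  = u[i] - coarse[i];                     // preserved untouched
        double force = gain * (target[i] - coarse[i]);       // control force acts on coarse scales only
        u[i] = (coarse[i] + dt * force) + fine;
    }
    std::printf("u[0] = %f, u[1] = %f\n", u[0], u[1]);
    return 0;
}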
waLBerla is a massively parallel software framework for simulating complex flows with the lattice Boltzmann method (LBM). Performance and scalability results are presented for SuperMUC, the world's fastest x86-based supercomputer, ranked number 6 on the Top500 list, and JUQUEEN, a Blue Gene/Q system ranked number 5. We reach resolutions with more than one trillion cells and perform up to 1.93 trillion cell updates per second using 1.8 million threads. The design and implementation of waLBerla is driven by a careful analysis of the performance on current petascale supercomputers. Our fully distributed data structures and algorithms allow for efficient, massively parallel simulations on these machines. Elaborate node level optimizations and vectorization using SIMD instructions result in highly optimized compute kernels for the single- and two-relaxation-time LBM. Excellent weak and strong scaling is achieved for a complex vascular geometry of the human coronary tree.
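As a point of reference for readers unfamiliar with the method, the following self-contained sketch implements a plain D2Q9 single-relaxation-time (BGK) collide-and-stream update with periodic boundaries; it is illustrative only and unrelated to waLBerla's optimized, SIMD-vectorized kernels (grid size, relaxation rate, and initialization are arbitrary assumptions).

// A minimal D2Q9 single-relaxation-time (BGK) lattice Boltzmann kernel with
// periodic boundaries -- an illustrative sketch only, not waLBerla code.
#include <vector>
#include <cstdio>

int main() {
    const int NX = 32, NY = 32, Q = 9, STEPS = 100;
    const double omega = 1.0;                               // relaxation rate (assumed)
    const int    cx[Q] = {0,1,0,-1,0,1,-1,-1,1};
    const int    cy[Q] = {0,0,1,0,-1,1,1,-1,-1};
    const double w[Q]  = {4.0/9,1.0/9,1.0/9,1.0/9,1.0/9,1.0/36,1.0/36,1.0/36,1.0/36};

    auto idx = [&](int x, int y, int q) { return (y * NX + x) * Q + q; };
    std::vector<double> f(NX * NY * Q), ftmp(f.size());

    // Initialize with an equilibrium at rest plus a small density perturbation.
    for (int y = 0; y < NY; ++y)
        for (int x = 0; x < NX; ++x) {
            double rho = 1.0 + (x == NX/2 && y == NY/2 ? 0.01 : 0.0);
            for (int q = 0; q < Q; ++q) f[idx(x,y,q)] = w[q] * rho;
        }

    for (int t = 0; t < STEPS; ++t) {
        for (int y = 0; y < NY; ++y)
            for (int x = 0; x < NX; ++x) {
                // Moments: density and velocity of the current cell.
                double rho = 0.0, ux = 0.0, uy = 0.0;
                for (int q = 0; q < Q; ++q) {
                    double fq = f[idx(x,y,q)];
                    rho += fq; ux += fq * cx[q]; uy += fq * cy[q];
                }
                ux /= rho; uy /= rho;
                // Collide (BGK) and stream (push) in one fused loop.
                for (int q = 0; q < Q; ++q) {
                    double cu    = cx[q]*ux + cy[q]*uy;
                    double feq   = w[q]*rho*(1.0 + 3.0*cu + 4.5*cu*cu - 1.5*(ux*ux+uy*uy));
                    double fpost = f[idx(x,y,q)] - omega * (f[idx(x,y,q)] - feq);
                    int xn = (x + cx[q] + NX) % NX, yn = (y + cy[q] + NY) % NY;
                    ftmp[idx(xn,yn,q)] = fpost;
                }
            }
        f.swap(ftmp);
    }
    std::printf("done after %d steps\n", STEPS);
    return 0;
}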
In many applications involving incompressible fluid flow, the Stokes system plays an important role. Complex flow problems may require extremely fine resolutions, easily resulting in saddle-point problems with more than a trillion (10^12) unknowns. Even on the most advanced supercomputers, the fast solution of such systems of equations is a highly nontrivial and challenging task. In this work we consider a realization of an iterative saddle-point solver which is based mathematically on the Schur-complement formulation of the pressure and algorithmically on the abstract concept of hierarchical hybrid grids. The design of our fast multigrid solver is guided by an innovative performance analysis for the computational kernels in combination with a quantification of the communication overhead. Excellent node performance and good scalability to almost a million parallel threads are demonstrated on different characteristic types of modern supercomputers.

1. Introduction. Current leading edge supercomputers can provide performance on the order of several petaflop/s, enabling the development of increasingly complex and accurate computational models of unprecedented size. This is especially relevant in flow simulations that may exhibit many small-scale features that must be resolved over large domains. As an example, the problem of earth mantle convection is posed on a thick spherical shell of approximately 3 000 km depth and 6 300 km radius, resulting in an overall volume of close to a trillion, that is, 10^12 km^3. A high resolution then automatically results in huge algebraic systems.

Although finite element (FE) methods are flexible enough to handle different local mesh sizes, fully adaptive meshing techniques require dynamic data structures and a complex program control flow that incurs significant computational cost. Recent work on parallel adaptive FE techniques can be found, e.g., in [1,2,11,44]. In [10] it is shown that an adaptive parallel FE method can reach locally 1 km resolution for the mantle convection problem on a large scale supercomputer. Here we will demonstrate that such a resolution can even be reached globally.

Higher order FE approaches can lead to better accuracy with the same number of unknowns, but the linear systems are denser. This implies more computational work, more memory access cost, and also higher parallel communication cost.
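For reference, a standard way to write the Schur-complement formulation of the pressure that the abstract refers to is sketched below; the notation (A for the discrete viscous operator, B for the discrete divergence) is chosen here for illustration and need not match the paper's exact scaling or signs.

\[
\begin{pmatrix} A & B^{T} \\ B & 0 \end{pmatrix}
\begin{pmatrix} u \\ p \end{pmatrix}
=
\begin{pmatrix} f \\ g \end{pmatrix}
\quad\Longrightarrow\quad
\underbrace{B A^{-1} B^{T}}_{=:\,S}\, p = B A^{-1} f - g,
\qquad
u = A^{-1}\bigl(f - B^{T} p\bigr).
\]

In such a scheme, an outer iteration on the pressure only needs the action of S, i.e., (approximate) solves with A; it is for these velocity-block solves that a matrix-free multigrid method on hierarchical hybrid grids is a natural fit.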
This article presents new algorithms for massively parallel granular dynamics simulations on distributed memory architectures using a domain partitioning approach. Collisions are modelled with hard contacts in order to hide their micro-dynamics and thus to extend the time and length scales that can be simulated. The multi-contact problem is solved using a nonlinear block Gauss-Seidel method that conforms to the subdomain structure. The parallel algorithms employ a sophisticated protocol between processors that delegates algorithmic tasks such as contact treatment and position integration uniquely and robustly to the processors. Communication overhead is minimized through aggressive message aggregation, leading to excellent strong and weak scaling. The robustness and scalability are assessed on three clusters including two peta-scale supercomputers with up to 458 752 processor cores. The simulations can reach an unprecedented resolution of up to ten billion (10^10) non-spherical particles and contacts.
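To illustrate the kind of per-contact solver the abstract refers to, the sketch below performs projected Gauss-Seidel sweeps over frictionless normal contacts between two discs; it is a deliberately reduced toy (no friction, no rotation, fixed contact set, zero restitution) and not the article's nonlinear block Gauss-Seidel implementation.

// A minimal projected Gauss-Seidel sweep over frictionless normal contacts
// between 2-D discs -- an illustrative sketch of the solver class used for
// hard contacts, not the article's parallel implementation.
#include <vector>
#include <algorithm>
#include <cstdio>

struct Body    { double x, y, vx, vy, invMass; };
struct Contact { int a, b; double nx, ny, lambda; };   // unit normal from a to b, accumulated normal impulse

int main() {
    std::vector<Body> bodies = {
        {0.0, 0.0,  1.0, 0.0, 1.0},   // moving right
        {1.9, 0.0, -1.0, 0.0, 1.0}    // moving left, in contact along normal (1,0)
    };
    std::vector<Contact> contacts = { {0, 1, 1.0, 0.0, 0.0} };

    for (int sweep = 0; sweep < 50; ++sweep)            // Gauss-Seidel sweeps over all contacts
        for (auto& c : contacts) {
            Body &A = bodies[c.a], &B = bodies[c.b];
            // Relative velocity along the contact normal (negative = approaching).
            double vn = (B.vx - A.vx) * c.nx + (B.vy - A.vy) * c.ny;
            double k  = A.invMass + B.invMass;           // effective mass term
            double dLambda = -vn / k;
            // Project: the total normal impulse must stay non-negative (no adhesion).
            double newLambda = std::max(0.0, c.lambda + dLambda);
            dLambda  = newLambda - c.lambda;
            c.lambda = newLambda;
            A.vx -= A.invMass * dLambda * c.nx;  A.vy -= A.invMass * dLambda * c.ny;
            B.vx += B.invMass * dLambda * c.nx;  B.vy += B.invMass * dLambda * c.ny;
        }
    std::printf("post-impact velocities: %.2f %.2f\n", bodies[0].vx, bodies[1].vx);
    return 0;
}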
When designing and implementing highly efficient scientific applications for parallel computers such as clusters of workstations, it is essential to consider and optimize the single-CPU performance of the codes. For this purpose, it is particularly important that the codes respect the hierarchical memory designs that computer architects employ in order to hide the effects of the growing gap between CPU performance and main memory speed. In this article, we present techniques to enhance the single-CPU efficiency of lattice Boltzmann methods, which are commonly used in computational fluid dynamics. We show various performance results for both 2D and 3D codes in order to emphasize the effectiveness of our optimization techniques.
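One memory-hierarchy consideration such optimizations typically weigh is the layout of the distribution functions in memory; the sketch below contrasts an array-of-structures with a structure-of-arrays layout (sizes, names, and the D3Q19 stencil are illustrative assumptions, not taken from the article).

// A small sketch contrasting two common memory layouts for LBM distribution
// functions; the layout choice affects cache and prefetch behaviour of the
// collision and propagation steps (all names and sizes are illustrative).
#include <vector>
#include <cstddef>

constexpr std::size_t Q = 19, NCELLS = 1u << 16;

// Array-of-Structures: all Q values of one cell are contiguous.
// Good spatial locality for the collision step, which reads and writes
// every direction of a single cell.
struct AoS {
    std::vector<double> f = std::vector<double>(NCELLS * Q);
    double& at(std::size_t cell, std::size_t q) { return f[cell * Q + q]; }
};

// Structure-of-Arrays: all cells of one direction are contiguous.
// Streams unit-stride through memory per direction, which suits the
// propagation step and vectorization of a fused collide-stream kernel.
struct SoA {
    std::vector<double> f = std::vector<double>(NCELLS * Q);
    double& at(std::size_t cell, std::size_t q) { return f[q * NCELLS + cell]; }
};

int main() {
    AoS a; SoA s;
    for (std::size_t c = 0; c < NCELLS; ++c)
        for (std::size_t q = 0; q < Q; ++q) { a.at(c, q) = 1.0; s.at(c, q) = 1.0; }
    return 0;
}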