Abstract. A numerical cloud model with Lagrangian particles coupled to an Eulerian flow is adapted for distributed memory systems. Eulerian and Lagrangian calculations can be done in parallel on CPUs and GPUs, respectively. The fraction of time when CPUs and GPUs work simultaneously is maximized at around 80 % for an optimal ratio of CPU and GPU workloads. The optimal ratio of workloads differs between systems because it depends on the relative computing performance of the CPUs and GPUs. The GPU workload can be adjusted by changing the number of Lagrangian particles, which is limited by device memory. Lagrangian computations scale with the number of nodes better than Eulerian computations because the former do not require collective communications. This means that the ratio of CPU and GPU computation times also depends on the number of nodes. Therefore, for a fixed number of Lagrangian particles, there is an optimal number of nodes for which the time CPUs and GPUs work simultaneously is maximized. Scaling efficiency up to this optimal number of nodes is close to 100 %. Simulations that use both CPUs and GPUs take between 10 and 120 times less time and use between 10 and 60 times less energy than simulations run on CPUs only. Simulations with Lagrangian microphysics take up to 8 times longer to finish than simulations with Eulerian bulk microphysics, but the difference decreases as more nodes are used. The presented method of adaptation for computing clusters can be used in any numerical model with Lagrangian particles coupled to an Eulerian fluid flow.
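The balancing idea in the abstract can be illustrated with a toy cost model (this is a sketch under stated assumptions, not code from the paper): if a time step lasts as long as the slower device, the fraction of the step during which CPUs and GPUs work simultaneously is the ratio of the faster device's time to the slower device's time. The function and variable names below are hypothetical.

```python
def overlap_fraction(t_cpu, t_gpu):
    """Fraction of a time step in which both devices are busy,
    assuming the step lasts as long as the slower device and
    the two computations start together (synchronization and
    communication overheads are ignored for simplicity)."""
    return min(t_cpu, t_gpu) / max(t_cpu, t_gpu)

# The overlap is maximized when the Eulerian (CPU) and
# Lagrangian (GPU) workloads per step are balanced:
print(overlap_fraction(1.0, 1.0))  # balanced: both devices always busy
print(overlap_fraction(1.0, 0.5))  # unbalanced: GPU idle half the step
```

In this simplified picture, tuning the number of Lagrangian particles changes `t_gpu`, which is how the GPU workload can be matched to the CPU workload on a given system.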
Abstract. A numerical cloud model with Lagrangian particles coupled to an Eulerian flow is adapted for distributed memory systems. Eulerian and Lagrangian calculations can be done in parallel on CPUs and GPUs, respectively. Scaling efficiency and the degree of parallelization of CPU and GPU calculations both exceed 50 % for up to 40 nodes. A sophisticated Lagrangian microphysics model slows down the simulation by only 50 % compared to a simplistic bulk microphysics model, thanks to the use of GPUs. The overhead of communication between cluster nodes is mostly related to the pressure solver. The presented method of adaptation for computing clusters can be used in any numerical model with Lagrangian particles coupled to an Eulerian fluid flow.
<p>Lagrangian, particle-based models are an emerging method for detailed modeling of cloud microphysics. In these models, a relatively small number of "super-droplets" is used to represent all hydrometeors. Each super-droplet represents a vast number of hydrometeors that have the same properties. The most popular method for solving collision-coalescence in these particle-based models is the all-or-nothing algorithm. In this algorithm, collision-coalescence of droplets within a spatial cell is modeled with a stochastic process. The number of trials is proportional to the number of super-droplets, which is significantly lower than the number of hydrometeors. Therefore, the variance of the number of hydrometeors of a given size is higher in the super-droplet algorithm than it would be if every droplet were modeled separately. The increase of this variability depends on the number of super-droplets. We use the University of Warsaw Lagrangian Cloud Model (UWLCM) to analyse how the randomness in the collision-coalescence algorithm affects the amount of precipitation in large eddy simulations of warm clouds.</p>
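The all-or-nothing step described above can be sketched as follows. This is a minimal illustration, not UWLCM's implementation: the pairing scheme, the coalescence-probability function `p_coal`, and the handling of equal multiplicities are simplified, and all names are hypothetical.

```python
import random

def all_or_nothing_step(multiplicities, radii, p_coal, rng=random):
    """One stochastic collision-coalescence step over random pairs of
    super-droplets in a grid cell (all-or-nothing variant, simplified).

    multiplicities[i] -- number of real droplets represented by super-droplet i
    radii[i]          -- droplet radius of super-droplet i
    p_coal(a, b)      -- probability that the pair coalesces in this step,
                         already scaled for multiplicities (user-supplied)
    """
    idx = list(range(len(multiplicities)))
    rng.shuffle(idx)
    # form non-overlapping random pairs within the cell
    for a, b in zip(idx[::2], idx[1::2]):
        if rng.random() < p_coal(a, b):
            # "all or nothing": every droplet of the lower-multiplicity
            # super-droplet coalesces with a droplet of the other one
            # (the equal-multiplicity case is ignored here for brevity)
            i, j = (a, b) if multiplicities[a] >= multiplicities[b] else (b, a)
            multiplicities[i] -= multiplicities[j]
            # coalesced droplets: volumes add, so radii combine cubically
            radii[j] = (radii[i] ** 3 + radii[j] ** 3) ** (1.0 / 3.0)
```

Because each pair either coalesces entirely or not at all, each step is a small number of Bernoulli trials per cell; this is the source of the sampling variance that the study relates to precipitation in the simulations.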