2020
DOI: 10.1080/10618562.2020.1785436

MPI Parallelisation of 3D Multiphase Smoothed Particle Hydrodynamics

Cited by 17 publications (6 citation statements) | References 17 publications

“…Therefore, the best strategy for saving computing time is to reduce the computational complexity of the numerical particle integration. This can be achieved by having multiple processors share the total number of particles, in other words, by a domain decomposition algorithm (Cui et al., 2020). To parallelise the level ice-ship interaction, the level ice (the computational domain), discretised into a number of particles, is decomposed across np processors, for example np = 9, as shown in Fig.…”
Section: MPI Parallel Scheme (mentioning)
Confidence: 99%

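The decomposition strategy quoted above can be illustrated with a short MPI sketch. The block partition below splits a 1D particle index range evenly across np ranks; the particle count and the index-based split are illustrative assumptions (the cited codes decompose the physical domain rather than a bare index range):

```cpp
// Minimal sketch: block-partitioning N particles across np MPI ranks.
// A 1D index split for illustration; real SPH codes decompose the
// spatial domain and exchange halo particles between neighbours.
#include <mpi.h>
#include <cstdio>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank = 0, np = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &np);   // e.g. np = 9 as in the quote

    const long n_particles = 1000000;     // illustrative total particle count
    const long base = n_particles / np;
    const long rem  = n_particles % np;   // first `rem` ranks take one extra
    const long begin = rank * base + (rank < rem ? rank : rem);
    const long count = base + (rank < rem ? 1 : 0);

    std::printf("rank %d integrates particles [%ld, %ld)\n",
                rank, begin, begin + count);
    MPI_Finalize();
    return 0;
}
```

Load balance follows directly from the split: no rank holds more than one particle more than any other, mirroring the equal sharing of particles described in the quote.
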
“…For instance, DualSPHysics adopted OpenMP as its CPU acceleration strategy [107]. Nonetheless, the speedup of OpenMP can be limited by the growing overhead of data communication between the shared memory and the threads (see, e.g., [262,264]). Another typical SMSs framework is Intel's Thread Building Blocks (TBB), which was developed in C++ for parallel programming on multi-core processors.…”
Section: Accelerating SPH Simulation with Central Processing Units (CPUs) (mentioning)
Confidence: 99%

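As a rough illustration of the shared-memory strategy the quote describes, the sketch below parallelises an SPH-style density summation with a single OpenMP pragma; the Gaussian kernel W, the O(N^2) neighbour search, and all names are placeholders, not code from DualSPHysics or any cited work:

```cpp
// Minimal sketch of OpenMP shared-memory parallelism for an SPH-style
// per-particle loop. Threads share the arrays and split the outer loop.
#include <omp.h>
#include <cmath>
#include <cstdio>
#include <vector>

double W(double r, double h) {                 // toy Gaussian kernel (assumption)
    return std::exp(-(r * r) / (h * h));
}

void compute_density(const std::vector<double>& x,
                     const std::vector<double>& mass,
                     std::vector<double>& rho, double h) {
    const long n = static_cast<long>(x.size());
    #pragma omp parallel for schedule(static)  // threads share the particle loop
    for (long i = 0; i < n; ++i) {
        double sum = 0.0;
        for (long j = 0; j < n; ++j)           // O(N^2) for brevity; real codes
            sum += mass[j] * W(x[i] - x[j], h); // use cell/neighbour lists
        rho[i] = sum;
    }
}

int main() {
    std::vector<double> x(10000), mass(10000, 1.0), rho(10000);
    for (std::size_t i = 0; i < x.size(); ++i) x[i] = 0.01 * i;
    compute_density(x, mass, rho, 0.05);
    std::printf("rho[0] = %f (threads: %d)\n", rho[0], omp_get_max_threads());
    return 0;
}
```

With static scheduling each thread receives a contiguous block of particles, which keeps memory access predictable; the communication overhead mentioned in the quote then shows up as cache and memory traffic on the shared arrays as the thread count grows.
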
“…In addition to the workload, another drawback of MPI is the high cost of building a massive multi-processor cluster, which is less accessible to ordinary SPH practitioners. Despite this, the advantages of MPI are evident; the most important is that it can realize large-scale SPH simulations by employing processor counts of up to thousands or even tens of thousands, as reported in [264,266,267].…”
Section: Accelerating SPH Simulation with Central Processing Units (CPUs) (mentioning)
Confidence: 99%

“…Particle-based methods are often criticized for their high computational cost [47,48]. Parallel computing architectures such as MPI [28], OpenMP [49], OpenCL [50], and CUDA [51] are commonly used to alleviate this drawback. Among these, CUDA is the most popular framework for taking advantage of the GPU's computational horsepower.…”
Section: GPU Implementation (mentioning)
Confidence: 99%

“…This parallel computing architecture is relatively simple and widely adopted in SPH models [25,26]. The message-passing interface (MPI) can structure the CPUs of a multi-machine cluster into a multi-node framework and provides a feasible approach for massive-scale simulations [27,28]. Graphics processing units (GPUs) are specialized computer processors that are very effective at data-parallel, computation-intensive tasks.…”
Section: Introduction (mentioning)
Confidence: 99%

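To make the multi-node idea concrete, here is a hypothetical sketch of the boundary (halo) exchange an MPI-based SPH code typically performs between neighbouring subdomains; the 1D periodic neighbour topology and buffer sizes are assumptions, not the scheme of any paper cited here:

```cpp
// Minimal sketch of a halo exchange between neighbouring subdomains in a
// 1D MPI decomposition. Each rank sends boundary particle positions to the
// next rank and receives ghosts from the previous one (illustrative only).
#include <mpi.h>
#include <vector>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank = 0, np = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &np);

    const int right = (rank + 1) % np;        // periodic neighbours for brevity
    const int left  = (rank - 1 + np) % np;

    std::vector<double> boundary(64, rank);   // positions near the right edge
    std::vector<double> ghosts(64, 0.0);      // ghosts from the left neighbour

    // Pairing the send and receive in one call avoids deadlock between ranks.
    MPI_Sendrecv(boundary.data(), 64, MPI_DOUBLE, right, 0,
                 ghosts.data(),   64, MPI_DOUBLE, left,  0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    // ...the ghost particles would now enter this rank's neighbour search...
    MPI_Finalize();
    return 0;
}
```

In a full code this exchange would run every time step, before the neighbour search, so that particles near subdomain boundaries see their full kernel support.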