2015
DOI: 10.1002/cpe.3717

Parallelizing and optimizing large‐scale 3D multi‐phase flow simulations on the Tianhe‐2 supercomputer

Abstract: The lattice Boltzmann method (LBM) is a widely used computational fluid dynamics method for flow problems with complex geometries and various boundary conditions. Large-scale LBM simulations with increasing resolution and extending temporal range require massive high-performance computing (HPC) resources, thus motivating us to port it onto modern many-core heterogeneous supercomputers like Tianhe-2. Although many-core accelerators such as graphic…
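For context, the block below is a minimal single-node sketch of the kind of lattice Boltzmann update the abstract refers to: a D2Q9 BGK collide-and-stream step on a periodic lattice, written in C++. It is an illustrative toy under assumed parameters (grid size, relaxation time tau, initial density bump), not code from the Tianhe-2 implementation described in the paper.

```cpp
// Minimal single-node sketch of an LBM time step (D2Q9, BGK collision),
// assuming a periodic lattice, tau = 0.8, and a small density bump as the
// initial condition. Illustrative only; not the paper's Tianhe-2 code.
#include <cmath>
#include <cstdio>
#include <vector>

constexpr int NX = 64, NY = 64, Q = 9;
constexpr int cx[Q] = {0, 1, 0, -1, 0, 1, -1, -1, 1};
constexpr int cy[Q] = {0, 0, 1, 0, -1, 1, 1, -1, -1};
constexpr double w[Q] = {4.0 / 9, 1.0 / 9, 1.0 / 9, 1.0 / 9, 1.0 / 9,
                         1.0 / 36, 1.0 / 36, 1.0 / 36, 1.0 / 36};
constexpr double tau = 0.8;  // BGK relaxation time (assumed)

inline int idx(int x, int y, int q) { return (y * NX + x) * Q + q; }

int main() {
    std::vector<double> f(NX * NY * Q), fnext(NX * NY * Q);
    // Initialise at rest with a small Gaussian density bump in the centre.
    for (int y = 0; y < NY; ++y)
        for (int x = 0; x < NX; ++x) {
            double r2 = (x - NX / 2) * (x - NX / 2) + (y - NY / 2) * (y - NY / 2);
            double rho = 1.0 + 0.01 * std::exp(-0.01 * r2);
            for (int q = 0; q < Q; ++q) f[idx(x, y, q)] = w[q] * rho;
        }
    for (int step = 0; step < 100; ++step) {
        for (int y = 0; y < NY; ++y)
            for (int x = 0; x < NX; ++x) {
                // Macroscopic density and velocity at this node.
                double rho = 0.0, ux = 0.0, uy = 0.0;
                for (int q = 0; q < Q; ++q) {
                    rho += f[idx(x, y, q)];
                    ux += f[idx(x, y, q)] * cx[q];
                    uy += f[idx(x, y, q)] * cy[q];
                }
                ux /= rho;
                uy /= rho;
                double usq = ux * ux + uy * uy;
                for (int q = 0; q < Q; ++q) {
                    // BGK relaxation towards the local equilibrium ...
                    double cu = cx[q] * ux + cy[q] * uy;
                    double feq = w[q] * rho * (1.0 + 3.0 * cu + 4.5 * cu * cu - 1.5 * usq);
                    double fpost = f[idx(x, y, q)] - (f[idx(x, y, q)] - feq) / tau;
                    // ... followed by streaming to the neighbour node (periodic wrap).
                    int xn = (x + cx[q] + NX) % NX, yn = (y + cy[q] + NY) % NY;
                    fnext[idx(xn, yn, q)] = fpost;
                }
            }
        f.swap(fnext);
    }
    std::printf("done: %dx%d lattice, 100 steps\n", NX, NY);
    return 0;
}
```

A production 3D multiphase code of the kind the paper describes would instead use a D3Q19 or D3Q27 lattice, a multiphase collision model, and an architecture-aware data layout and parallelization, which is the subject of the work itself.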

Cited by 15 publications (6 citation statements)
References 18 publications
“…The porosity and permeability of the core sample used in Li et al 121 are 7.5% and 200 nD, respectively, whereas those used in Vega and Brien 42 are 35% and 1.3 mD, respectively. The final oil recovery from CO2-EOR in Li et al 121 and Vega et al 40 are nearly 67 and 100%, respectively. In this study, the porosity and permeability of the used sample are closer to those of Vega and Brien.…”
Section: Sandstone Core Sample (mentioning, confidence: 91%)
“…62−71 Eshraghi et al 72 reported that the LBM model could be used in the simulation of dendritic solidification with 36 billion grid points, and the parallel efficiency was almost 100% using 1000−4068 CPU cores. Li et al 40 reported that the three-dimensional (3D) multiphase LBM simulation cases (model size: 512 × 256 × 256) could achieve a parallel efficiency of approximately 60% with 400K CPU cores using a hybrid, heterogeneous, multiple data programming model.…”
Section: Introduction (mentioning, confidence: 99%)
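As an illustration of the distributed-memory side of such a hybrid programming model, the sketch below partitions an LBM distribution-function array into 1-D slabs of z-planes across MPI ranks and exchanges one ghost plane with each neighbour before streaming. The slab width, lattice dimensions, and D3Q19 layout are hypothetical assumptions, not details taken from the cited papers.

```cpp
// Hedged sketch of a 1-D slab decomposition with halo exchange for an LBM
// distribution-function array, assuming a D3Q19 lattice and hypothetical
// sizes. Not taken from the cited papers; illustration only.
#include <mpi.h>
#include <vector>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank = 0, size = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // Each rank owns local_nz z-planes (indices 1..local_nz) plus one ghost
    // plane on each side (indices 0 and local_nz + 1) for streaming across faces.
    const int NX = 64, NY = 64, Q = 19, local_nz = 16;
    const int plane = NX * NY * Q;  // doubles per z-plane
    std::vector<double> f((local_nz + 2) * (long)plane, 0.0);

    const int up = (rank + 1) % size;           // periodic neighbour above
    const int down = (rank - 1 + size) % size;  // periodic neighbour below

    // Send the top owned plane up and receive the lower ghost plane from below.
    MPI_Sendrecv(&f[(long)local_nz * plane], plane, MPI_DOUBLE, up, 0,
                 &f[0], plane, MPI_DOUBLE, down, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    // Send the bottom owned plane down and receive the upper ghost plane from above.
    MPI_Sendrecv(&f[(long)1 * plane], plane, MPI_DOUBLE, down, 1,
                 &f[(long)(local_nz + 1) * plane], plane, MPI_DOUBLE, up, 1,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    // ... collide-and-stream over the owned planes would go here ...

    MPI_Finalize();
    return 0;
}
```

Overlapping such halo exchanges with computation on interior planes (for example via MPI_Isend/MPI_Irecv) is one common way codes of this kind sustain high parallel efficiency at scale.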
“…The HARVEY code successfully addresses key challenges of image-based hemodynamics on supercomputers, such as limited memory capacity and bandwidth, flexible load balancing, and scalability. In summary, most previous HPC implementations mainly provided solutions that exploit cache-based approaches to reuse spatial data for accelerating LBM computation; however, those solutions are not optimized for low-cost and particularly memory-limited embedded platforms.…”
Section: Introduction (mentioning, confidence: 99%)
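The cache-based spatial reuse mentioned above can be illustrated with a simple loop-tiling sketch: the lattice is swept in small tiles so that the data touched by one tile stays resident in cache while its nodes are updated. The tile size and the placeholder five-point stencil update are assumptions for illustration only and do not reflect HARVEY's actual optimization scheme.

```cpp
// Illustrative loop-tiling sketch: sweep the lattice in TILE x TILE blocks so
// the neighbourhood data a block reads stays in cache while it is updated.
// Tile size and the five-point stencil are assumptions, not HARVEY's scheme.
#include <algorithm>
#include <vector>

constexpr int NX = 1024, NY = 1024, TILE = 32;

void sweep_blocked(std::vector<float>& a, const std::vector<float>& b) {
    // Interior nodes only; boundaries would be handled separately.
    for (int by = 1; by < NY - 1; by += TILE)
        for (int bx = 1; bx < NX - 1; bx += TILE)
            for (int y = by; y < std::min(by + TILE, NY - 1); ++y)
                for (int x = bx; x < std::min(bx + TILE, NX - 1); ++x)
                    // Placeholder stencil update standing in for an LBM node update.
                    a[y * NX + x] = 0.25f * (b[y * NX + x - 1] + b[y * NX + x + 1] +
                                             b[(y - 1) * NX + x] + b[(y + 1) * NX + x]);
}

int main() {
    std::vector<float> a(NX * NY, 0.0f), b(NX * NY, 1.0f);
    sweep_blocked(a, b);
    return 0;
}
```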