50th AIAA Aerospace Sciences Meeting Including the New Horizons Forum and Aerospace Exposition 2012
DOI: 10.2514/6.2012-722

GPU-accelerated Large-Eddy Simulation of Turbulent Channel Flows

Abstract: High-performance computing clusters augmented with cost- and power-efficient graphics processing units (GPUs) provide new opportunities to broaden the use of the large-eddy simulation technique to study high-Reynolds-number turbulent flows in fluids engineering applications. In this paper, we extend our earlier work on multi-GPU acceleration of an incompressible Navier-Stokes solver to include a large-eddy simulation (LES) capability. In particular, we implement the Lagrangian dynamic subgrid scale model and…
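The Lagrangian dynamic model named in the abstract is a refinement of the static Smagorinsky closure, in which the subgrid eddy viscosity is nu_t = (Cs * Delta)^2 * |S|, with |S| = sqrt(2 S_ij S_ij) the strain-rate magnitude. As a minimal sketch only (a static-coefficient NumPy illustration on a uniform grid, not the paper's Lagrangian dynamic model or its multi-GPU CUDA implementation; the function name and the default Cs = 0.16 are this sketch's own assumptions):

```python
import numpy as np

def smagorinsky_nu_t(u, v, w, dx, cs=0.16):
    """Static Smagorinsky eddy viscosity nu_t = (cs*dx)**2 * |S| on a
    uniform grid of spacing dx, using central differences (np.gradient).
    A simplified stand-in for the Lagrangian dynamic SGS model."""
    # velocity gradients; axes (0, 1, 2) correspond to (x, y, z)
    dudx, dudy, dudz = np.gradient(u, dx, dx, dx)
    dvdx, dvdy, dvdz = np.gradient(v, dx, dx, dx)
    dwdx, dwdy, dwdz = np.gradient(w, dx, dx, dx)
    # symmetric strain-rate tensor S_ij = 0.5*(du_i/dx_j + du_j/dx_i)
    s11, s22, s33 = dudx, dvdy, dwdz
    s12 = 0.5 * (dudy + dvdx)
    s13 = 0.5 * (dudz + dwdx)
    s23 = 0.5 * (dvdz + dwdy)
    # |S| = sqrt(2 S_ij S_ij); off-diagonal terms appear twice
    s_mag = np.sqrt(2.0 * (s11**2 + s22**2 + s33**2
                           + 2.0 * (s12**2 + s13**2 + s23**2)))
    return (cs * dx) ** 2 * s_mag
```

For a uniform shear u = gamma*y this reduces to nu_t = (cs*dx)^2 * gamma everywhere, a quick sanity check; the dynamic model instead computes cs from the resolved field and, in the Lagrangian variant, averages it along fluid pathlines.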

Cited by 4 publications
(3 citation statements)
References 23 publications
“…DeLeon and coworkers [28,29] recently demonstrated another GPU-based solver for LES of turbulent incompressible flows. The subgrid-scale terms were modeled using the Lagrangian dynamic Smagorinsky model.…”
Section: Turbulent Flow
confidence: 99%
“…However, considering that many scientific computational tools had previously been developed in Fortran, a CUDA Fortran compiler, essentially a conventional Fortran compiler with CUDA extensions, was made available in 2010 through joint work by NVIDIA and PGI (The Portland Group). Since then, many works (Griebel and Zaspel 2010; DeLeon and Senocak 2012; Markesteijn, Semiletov, and Karabasov 2015; Zhu, Phillips, Spandan, Donners, Ruetsch, Romero, Ostilla-Mónico, Yang, Lohse, Verzicco, et al. 2018; Kumar, Abdel-Majeed, and Annavaram 2019) have adapted and redesigned existing codes for the GPU architecture.…”
Section: Introduction
confidence: 99%
“…Researchers have matured the technology of extending solvers from a single GPU to several GPUs and even clusters [6][7][8], examining the differing speedups of explicit and implicit schemes [9], the variation among structured, unstructured, and hybrid grids [10,11], and the influence of single versus double precision [12], while high-order schemes and high-fidelity methods attract increasing attention [13][14][15][16][17][18]. Driven by hardware development, GPUs now have the power to simulate more complicated problems such as turbulence, where LES was studied earlier [19,20] but DNS is still in its infancy [21][22][23][24]. All of this work involved program-performance optimization, which is critical to accelerating codes on GPUs.…”
Section: Introduction
confidence: 99%