Proceedings of the International Conference on High Performance Computing, Networking, Storage and Analysis 2013
DOI: 10.1145/2503210.2503265
Solving the compressible Navier-Stokes equations on up to 1.97 million cores and 4.1 trillion grid points

Cited by 41 publications (36 citation statements)
References 15 publications
“…2π r û(r) dr (Eq. 21), and an area-weighted equipartitioning of the flow rates, Û^(m) = (A_g^(m)/A_g) Û, has been assumed. The accompanying thermoviscous functions are…”
Section: IV.B Thermoacoustic Stack in the x Direction (mentioning)
confidence: 99%
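Read literally, the quoted relation apportions the total flow rate among the pores by gas-area fraction: if pore m occupies, say, 10% of the total gas cross-section A_g, it is assigned Û^(m) = (A_g^(m)/A_g) Û = 0.1 Û. This is an illustrative reading of the excerpt's symbols, not a figure from the cited paper.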
“…Performance may be improved by avoiding external node communication until exhausting the domain of dependence, allowing the calculation to advance multiple timesteps while requiring a smaller number of communication events. This idea is the basis of swept time-space decomposition [3,4]. Extreme-scale computing clusters have recently been used to solve the compressible Navier-Stokes equations on over 1.97 million CPU cores [5]. The monetary cost, power consumption, and size of such a cluster impede the realization of widespread peta- and exa-scale computing required for real-time, high-fidelity CFD simulations.…”
mentioning
confidence: 99%
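The swept idea described in this excerpt can be illustrated with a minimal sketch: a block advances k timesteps on purely local data, with the correctly updated interior shrinking by one cell per side each step (the domain of dependence of a three-point stencil), and only then performs a single k-cell halo exchange. This is a hedged illustration of the general technique, not the cited implementation; NumPy is assumed, and all names (advance_without_halo_exchange, u, k) are invented for the example.

    import numpy as np

    def advance_without_halo_exchange(u, k):
        # Advance a 1D block k timesteps using only local data.
        # After step s, only cells s .. n-1-s carry correct values,
        # because a cell at the new time level depends on its left and
        # right neighbours at the old level (the domain of dependence).
        u = u.copy()
        n = u.size
        for step in range(1, k + 1):
            lo, hi = step, n - step
            u[lo:hi] = 0.5 * u[lo:hi] + 0.25 * (u[lo - 1:hi - 1] + u[lo + 1:hi + 1])
        return u  # only u[k : n-k] is valid; the k-cell rim must come from neighbours

Replacing k per-step halo exchanges with a single exchange of a k-cell-wide halo is the communication reduction the excerpt attributes to swept time-space decomposition.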
“…Computational systems no longer grow "upwards" with higher clock rates and faster memory access but have been growing "outwards" with massively distributed and parallel resources. For example, Stanford's Center for Turbulence Research has recently used 1.97 × 10⁶ cores with approximately 1.6 petabytes of memory to find numerical solutions of the compressible Navier-Stokes equations [6]. We discuss the scalability of these kinds of computation systems more in Section III-A, but it is clear that this parallel computational paradigm calls for different types of algorithms than are used in the more traditional serial computational paradigm.…”
Section: Introduction (mentioning)
confidence: 99%
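For a rough sense of the per-core scale implied by the quoted figures, 1.6 petabytes spread across 1.97 × 10⁶ cores comes to roughly 0.8 gigabytes of memory per core. This is simple division of the excerpt's numbers, not a figure reported by the cited paper.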