Abstract: A key trend facing extreme-scale computational science is the widening gap between computational and I/O rates, and the challenge that follows is how best to gain insight from simulation data when it is increasingly impractical to save it to persistent storage for subsequent visual exploration and analysis. One approach to this challenge is centered on the idea of in situ processing, where visualization and analysis are performed while the data is still resident in memory. This paper examines several key design and performance issues related to in situ processing at extreme scale on modern platforms: scalability, overhead, performance measurement and analysis, comparison and contrast with a traditional post hoc approach, and interfacing with simulation codes. We illustrate these principles in practice with studies, conducted on large-scale HPC platforms, that include a miniapplication and multiple science application codes, one of which demonstrates in situ methods in use at greater than 1M-way concurrency.
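The contrast between in situ and post hoc processing can be sketched as follows. This is an illustrative toy, not PHASTA or any specific in situ framework: instead of writing each timestep to persistent storage for later analysis, the simulation loop hands its in-memory state to an analysis callback.

```python
# Illustrative sketch of the in situ pattern: analysis runs inside the
# simulation loop, on live data, so nothing is written to disk.
# `advance` and `analyze` are hypothetical stand-ins for a real solver
# step and a real in situ reduction.

def in_situ_loop(n_steps, advance, analyze):
    """Advance the simulation and apply `analyze` to the in-memory state."""
    state = {"step": 0, "field": [0.0] * 8}  # stand-in for simulation state
    results = []
    for _ in range(n_steps):
        state = advance(state)
        results.append(analyze(state))  # data never leaves memory
    return results

def advance(state):
    # Trivial stand-in "solver": increment every field value.
    return {"step": state["step"] + 1,
            "field": [x + 1.0 for x in state["field"]]}

def analyze(state):
    # A cheap in situ reduction, e.g. a field maximum.
    return max(state["field"])

print(in_situ_loop(3, advance, analyze))  # [1.0, 2.0, 3.0]
```

A post hoc workflow would instead serialize `state` every step and run `analyze` later over the stored files; the in situ version trades that I/O cost for compute overhead inside the simulation loop, which is exactly the trade-off the paper measures.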
An active flow control application on a realistic wing design is enabled by a scalable, fully implicit, unstructured, finite-element flow solver and high-performance computing resources. This article describes the active flow control application; summarizes the main features in the implementation of a massively parallel turbulent flow solver, PHASTA; and demonstrates the method's strong scalability at extreme scale.
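Strong scalability, as demonstrated here, means solving a fixed-size problem faster as core counts grow. A minimal sketch of how strong-scaling efficiency is typically computed from measured wall-clock times (the timings below are hypothetical, not PHASTA measurements):

```python
# Strong scaling: fixed problem size, increasing core counts.
# Efficiency = (measured speedup) / (ideal speedup).

def strong_scaling_efficiency(base_cores, base_time, cores, time):
    """Parallel efficiency of a run relative to a baseline run."""
    speedup = base_time / time          # how much faster than the baseline
    ideal_speedup = cores / base_cores  # perfect scaling would give this
    return speedup / ideal_speedup

# Hypothetical runs: 100 s on 1,024 cores vs. 13.5 s on 8,192 cores.
eff = strong_scaling_efficiency(1024, 100.0, 8192, 13.5)
print(f"{eff:.1%}")  # 92.6%
```

Values near 100% indicate near-ideal strong scaling; at extreme scale, efficiency typically erodes as per-core work shrinks and communication dominates.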
Advances in high performance computing (HPC) have allowed the direct numerical simulation (DNS) approach, coupled with interface tracking methods (ITM), to perform high-fidelity simulations of turbulent bubbly flows in various complex geometries. In this work, we have chosen the geometry of a pressurized water reactor (PWR) core subchannel to perform a set of interface tracking simulations (ITS) with fully resolved liquid turbulence. The presented research utilizes a massively parallel finite-element-based code, PHASTA, for the subchannel-geometry simulations of bubbly flow turbulence. The main objective of this research is to demonstrate the ITS capabilities in gaining new insight into bubble/turbulence interactions and assisting the development of improved closure laws for multiphase computational fluid dynamics (M-CFD). Both single- and two-phase turbulent flows were studied within a single PWR subchannel. The analysis of numerical results includes the mean gas and liquid velocity profiles, void fraction distribution, and turbulent kinetic energy profiles. Two sets of flow rates and bubble sizes were used in the simulations. The chosen flow rates corresponded to Reynolds numbers of 29,079 and 80,775 based on channel hydraulic diameter (D_h) and mean velocity. The finite-element unstructured grids utilized for these simulations include 53.8 million and 1.11 billion elements, respectively. This has allowed the liquid-phase turbulence to be fully resolved.
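The Reynolds numbers quoted above follow the standard definition Re = U D_h / nu, based on the mean velocity U and the channel hydraulic diameter D_h. A minimal sketch (the property values below are illustrative, not the paper's actual subchannel conditions):

```python
# Reynolds number based on hydraulic diameter and mean velocity,
# as used in the abstract: Re = U * D_h / nu.

def reynolds_number(mean_velocity, hydraulic_diameter, kinematic_viscosity):
    """Re = U * D_h / nu (all quantities in consistent SI units)."""
    return mean_velocity * hydraulic_diameter / kinematic_viscosity

def hydraulic_diameter(flow_area, wetted_perimeter):
    """Standard duct definition: D_h = 4 * A / P."""
    return 4.0 * flow_area / wetted_perimeter

# Illustrative values (water-like nu = 1e-6 m^2/s, D_h = 0.01 m, U = 2.9 m/s):
re = reynolds_number(2.9, 0.01, 1e-6)
print(round(re))  # 29000, i.e. the same order as the paper's Re = 29,079
```

At these Reynolds numbers the flow is fully turbulent, which is why grids of 53.8 million and 1.11 billion elements are needed to resolve the liquid-phase turbulence directly rather than model it.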