Data from experiments and direct simulations of turbulence have historically been used to calibrate simple engineering models such as those based on the Reynolds-averaged Navier-Stokes (RANS) equations. In the past few years, with the availability of large and diverse datasets, researchers have begun to explore methods to systematically inform turbulence models with data, with the goal of quantifying and reducing model uncertainties. This review surveys recent developments in bounding uncertainties in RANS models via physical constraints, in adopting statistical inference to characterize model coefficients and estimate discrepancy, and in using machine learning to improve turbulence models. Key principles, achievements, and challenges are discussed. A central perspective advocated in this review is that by exploiting foundational knowledge in turbulence modeling and physical constraints, data-driven approaches can yield useful predictive models.
arXiv:1804.00183v3 [physics.flu-dyn]
Reynolds-averaged Navier-Stokes (RANS) equations are widely used in engineering turbulent flow simulations. However, RANS predictions may have large discrepancies due to the uncertainties in modeled Reynolds stresses. Recently, Wang et al. demonstrated that machine learning can be used to improve the RANS-modeled Reynolds stresses by leveraging data from high-fidelity simulations (Physics-informed machine learning approach for reconstructing Reynolds stress modeling discrepancies based on DNS data. Physical Review Fluids 2, 034603, 2017). However, solving for mean flows from the improved Reynolds stresses still poses significant challenges due to potential ill-conditioning of RANS equations with Reynolds stress closures. Enabling improved predictions of mean velocities is of profound practical importance, because often the velocity and its derived quantities (QoIs, e.g., drag, lift, surface friction), and not the Reynolds stress itself, are of ultimate interest in RANS simulations. To this end, we present a comprehensive framework for augmenting turbulence models with physics-informed machine learning, illustrating a complete workflow from the identification of input features to the final prediction of mean velocities. This work has two innovations. First, we demonstrate a systematic procedure to generate mean flow features based on the integrity basis for mean flow tensors. Second, we propose using machine learning to predict the linear and nonlinear parts of the Reynolds stress tensor separately. Inspired by the finite polynomial representation of tensors in classical turbulence modeling, such a decomposition is instrumental in overcoming the ill-conditioning of the RANS equations. Numerical tests demonstrate the merits of the proposed framework.
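The linear/nonlinear decomposition described above can be illustrated with a minimal pointwise sketch (not the authors' implementation): an optimal eddy viscosity is obtained by projecting the stress anisotropy onto the mean strain-rate tensor, and the remainder is the nonlinear part. The least-squares projection used here is a standard choice, assumed for illustration:

```python
import numpy as np

def decompose_reynolds_stress(tau, S, k):
    """Split a pointwise Reynolds stress tensor into a linear
    (eddy-viscosity) part and a nonlinear residual.

    tau : 3x3 Reynolds stress tensor
    S   : 3x3 mean strain-rate tensor
    k   : turbulent kinetic energy
    """
    I = np.eye(3)
    a = tau - (2.0 / 3.0) * k * I                       # anisotropic part
    # Least-squares eddy viscosity: project a onto -2S (Frobenius product).
    nu_t = -np.tensordot(a, S) / (2.0 * np.tensordot(S, S))
    a_linear = -2.0 * nu_t * S      # can be treated implicitly in the solver
    a_nonlinear = a - a_linear      # remaining explicit source term
    return nu_t, a_linear, a_nonlinear
```

Treating the linear part implicitly, as an effective viscosity, is what improves the conditioning of the mean-flow solve; only the typically smaller nonlinear remainder is carried as an explicit source term.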
Despite their well-known limitations, Reynolds-averaged Navier-Stokes (RANS) models are still the workhorse tools for turbulent flow simulations in today's engineering analysis, design, and optimization. While the predictive capability of RANS models depends on many factors, for many practical flows the turbulence models are by far the largest source of uncertainty.
Machine learning (ML) provides novel and powerful ways of accurately and efficiently recognizing complex patterns, emulating nonlinear dynamics, and predicting the spatio-temporal evolution of weather and climate processes. Off-the-shelf ML models, however, do not necessarily obey the fundamental governing laws of physical systems, nor do they generalize well to scenarios on which they have not been trained. We survey systematic approaches to incorporating physics and domain knowledge into ML models and distill these approaches into broad categories. Through 10 case studies, we show how these approaches have been used successfully for emulating, downscaling, and forecasting weather and climate processes. The accomplishments of these studies include greater physical consistency, reduced training time, improved data efficiency, and better generalization. Finally, we synthesize the lessons learned and identify scientific, diagnostic, computational, and resource challenges for developing truly robust and reliable physics-informed ML models for weather and climate processes. This article is part of the theme issue ‘Machine learning for weather and climate modelling’.
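One broad category of approaches surveyed here is the soft-constraint route: adding a physics-based penalty to the training loss so that violations of a governing law are penalized alongside data misfit. A minimal sketch, assuming a hypothetical conservation constraint that each prediction vector represents mass fractions summing to one (the constraint and the weight `lam` are illustrative, not from the article):

```python
import numpy as np

def physics_informed_loss(y_pred, y_true, lam=1.0):
    """Data misfit plus a soft physics penalty.

    Assumed constraint (illustrative only): each prediction vector
    represents mass fractions, which must sum to one. `lam` weights
    the physics term against the data term.
    """
    data_loss = np.mean((y_pred - y_true) ** 2)
    residual = np.sum(y_pred, axis=-1) - 1.0   # conservation violation
    physics_loss = np.mean(residual ** 2)
    return data_loss + lam * physics_loss
```

The same pattern carries over to differentiable frameworks, where the penalty term is minimized jointly with the data term during training.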
In computational fluid dynamics simulations of industrial flows, models based on the Reynolds-averaged Navier-Stokes (RANS) equations are expected to play an important role in decades to come. However, model uncertainties are still a major obstacle for the predictive capability of RANS simulations. This review examines both the parametric and structural uncertainties in turbulence models. We review recent literature on data-free (uncertainty propagation) and data-driven (statistical inference) approaches for quantifying and reducing model uncertainties in RANS simulations. Moreover, the fundamentals of uncertainty propagation and Bayesian inference are introduced in the context of RANS model uncertainty quantification. Finally, the literature on uncertainties in scale-resolving simulations is briefly reviewed with particular emphasis on large eddy simulations.
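The data-driven (statistical inference) route can be sketched in miniature: random-walk Metropolis sampling of the posterior of a single model coefficient, assuming a toy linear model y = c·x with a Gaussian likelihood and a flat prior (all names and choices here are illustrative, not from the review):

```python
import numpy as np

rng = np.random.default_rng(0)

def log_posterior(c, x, y_obs, sigma=0.1):
    """Gaussian likelihood around the toy model y = c * x; flat prior on c."""
    return -0.5 * np.sum((y_obs - c * x) ** 2) / sigma**2

def metropolis(x, y_obs, n_samples=5000, step=0.05, c0=1.0):
    """Random-walk Metropolis sampling of the coefficient posterior."""
    samples, c, lp = [], c0, log_posterior(c0, x, y_obs)
    for _ in range(n_samples):
        c_prop = c + step * rng.normal()               # propose a move
        lp_prop = log_posterior(c_prop, x, y_obs)
        if np.log(rng.uniform()) < lp_prop - lp:       # accept/reject
            c, lp = c_prop, lp_prop
        samples.append(c)
    return np.array(samples)
```

In a realistic RANS calibration the forward model is a full flow solve rather than c·x, which is why surrogate models and more efficient samplers dominate the literature reviewed here.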
With the growth of available computational resources, CFD-DEM (computational fluid dynamics-discrete element method) is becoming an increasingly promising and feasible approach for the study of sediment transport. Several existing CFD-DEM solvers are applied in the chemical engineering and mining industries. However, a robust CFD-DEM solver for the simulation of sediment transport is still desirable. In this work, the development of a three-dimensional, massively parallel, and open-source CFD-DEM solver SediFoam is detailed. This solver is built on the open-source solvers OpenFOAM and LAMMPS. OpenFOAM is a CFD toolbox that can perform three-dimensional fluid flow simulations on unstructured meshes; LAMMPS is a massively parallel DEM solver for molecular dynamics. Several validation tests of SediFoam are performed using cases of a wide range of complexities. The results obtained in the present simulations are consistent with those in the literature, which demonstrates the capability of SediFoam for sediment transport applications. In addition to the validation tests, the parallel efficiency of SediFoam is studied to test the performance of the code for large-scale and complex simulations. The parallel efficiency tests show that the scalability of SediFoam is satisfactory in simulations using up to O(10^7) particles.
Fast prediction of permeability directly from images, enabled by image-recognition neural networks, is a novel pore-scale modeling method with great potential. This article presents a framework that includes (1) generation of porous media samples, (2) computation of permeability via fluid dynamics simulations, (3) training of convolutional neural networks (CNN) with simulated data, and (4) validation against simulations. Comparison of the machine learning results and the ground truths suggests excellent predictive performance across a wide range of porosities and pore geometries, especially for those with dilated pores. Owing to such heterogeneity, the permeability cannot be estimated using the conventional Kozeny-Carman approach. Computational time was reduced by several orders of magnitude compared to fluid dynamics simulations. We found that, by including physical parameters that are known to affect permeability in the neural network, the physics-informed CNN generated better results than the regular CNN, although the improvements vary with the implemented heterogeneity.

Computation of pore-scale transport properties from pore-scale images is an important aspect of image-based pore-scale studies. Such computations are generally performed in two ways, i.e., a direct simulation approach and a simplified network approach. In the first approach, the microscopic transport equations are solved directly on the geometry shown by the pore-scale images to obtain averaged properties such as permeability, relative permeability, or dispersion coefficient. Both single and multiphase flows can be accounted for, and both reactive and non-reactive transport equations can be solved. This direct approach is generally considered to be more accurate, but the computational cost is very high. For processes such as multiphase flows and reactive transport with slow kinetics, it is nearly impossible to solve the governing equations in a medium of even moderate size.
Therefore, the second, alternative approach is to first abstract the porous medium as a discrete network. By applying simplified flow and transport laws on the network, the computational cost to obtain averaged properties can be effectively lowered [11]. Some transport properties of porous media, such as permeability, are solely functions of pore geometry. Therefore, it should be possible to predict them using a neural network approach, which is to develop a surrogate model that directly maps a pore geometry to physical properties. Such a task resembles that in image classification [12,13], where a model takes an image as input and gives the classification label as output by recognizing the object in the image, e.g., cars, animals, or even subtypes thereof (i.e., car make or animal breed). Once constructed, such surrogate models can potentially enable fast prediction of physical properties of porous media without performing direct simulations or network calculations. Recent studies of chemical imaging of rocks also involve surrogate models. For example, Hao et al. [14] generated a molecula...
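For context, the conventional Kozeny-Carman estimate mentioned above is a closed-form map from porosity and grain size to permeability; the CNN surrogate instead learns such a map directly from images, which is what lets it handle heterogeneity the closed form cannot. A sketch of the baseline, assuming the common sphere-pack form with the empirical constant 180:

```python
def kozeny_carman_permeability(porosity, d_p, c=180.0):
    """Kozeny-Carman permeability estimate [m^2] for a packed bed of
    spheres of diameter d_p [m]; c = 180 is the usual empirical constant."""
    return porosity**3 * d_p**2 / (c * (1.0 - porosity) ** 2)
```

The estimate depends only on bulk porosity and a characteristic grain size, which is precisely why it breaks down for media whose permeability is controlled by pore-geometry details such as dilated pores.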