Turbulence modeling is a critical component in numerical simulations of industrial flows based on Reynolds-averaged Navier-Stokes (RANS) equations. However, after decades of effort in the turbulence modeling community, universally applicable RANS models with predictive capabilities are still lacking. Large discrepancies in the RANS-modeled Reynolds stresses are the main source of error limiting the predictive accuracy of RANS models. Identifying these discrepancies is therefore an important step toward improving RANS modeling. In this work, we propose a data-driven, physics-informed machine learning approach for reconstructing discrepancies in RANS-modeled Reynolds stresses. The discrepancies are formulated as functions of the mean flow features. Using a modern machine learning technique based on random forests, the discrepancy functions are trained on existing DNS databases and then used to predict Reynolds stress discrepancies in different flows where data are not available. The proposed method is evaluated on two classes of flows: (1) fully developed turbulent flows in a square duct at various Reynolds numbers and (2) flows with massive separations. For the separated flows, two training scenarios of increasing difficulty are considered: (1) the flow in the same periodic-hills geometry at a lower Reynolds number, and (2) the flow in a different hill geometry with a similar recirculation zone. Excellent predictive performance was observed in both scenarios, demonstrating the merits of the proposed method.
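The regression step described above can be sketched with a random forest. This is a purely illustrative example: the feature names, array sizes, and toy target function are assumptions standing in for the mean-flow features and DNS-derived discrepancies, not the paper's actual inputs.

```python
# Minimal sketch: learn Reynolds stress discrepancies as functions of
# mean-flow features with a random forest. All data here are synthetic
# stand-ins; real features would come from a baseline RANS solve and
# real targets from DNS-RANS differences.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# 500 mesh points, 5 mean-flow features each (e.g., normalized strain
# rate, pressure gradient), one discrepancy component per point.
X_train = rng.normal(size=(500, 5))
y_train = np.sin(X_train[:, 0]) + 0.1 * X_train[:, 1]  # toy discrepancy

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Predict discrepancies in a "new flow" (here just fresh random points)
X_new = rng.normal(size=(100, 5))
delta_tau = model.predict(X_new)
print(delta_tau.shape)  # one predicted discrepancy per mesh point
```

In the actual method, one such regression would be trained per discrepancy component, and the predicted discrepancies would then be injected back into the RANS-modeled Reynolds stresses.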
Reynolds-averaged Navier-Stokes (RANS) equations are widely used in engineering simulations of turbulent flows. However, RANS predictions may have large discrepancies due to uncertainties in the modeled Reynolds stresses. Recently, Wang et al. demonstrated that machine learning can be used to improve RANS-modeled Reynolds stresses by leveraging data from high-fidelity simulations (Physics-informed machine learning approach for reconstructing Reynolds stress modeling discrepancies based on DNS data. Physical Review Fluids 2, 034603, 2017). However, solving for mean flows from the improved Reynolds stresses still poses significant challenges due to potential ill-conditioning of the RANS equations with Reynolds stress closures. Enabling improved predictions of mean velocities is of profound practical importance, because the velocity and its derived quantities of interest (QoIs; e.g., drag, lift, surface friction), and not the Reynolds stress itself, are often of ultimate interest in RANS simulations. To this end, we present a comprehensive framework for augmenting turbulence models with physics-informed machine learning, illustrating a complete workflow from the identification of input features to the final prediction of mean velocities. This work makes two contributions. First, we demonstrate a systematic procedure for generating mean flow features based on the integrity basis of mean flow tensors. Second, we propose using machine learning to predict the linear and nonlinear parts of the Reynolds stress tensor separately. Inspired by the finite polynomial representation of tensors in classical turbulence modeling, such a decomposition is instrumental in overcoming the ill-conditioning of the RANS equations. Numerical tests demonstrated the merits of the proposed framework.
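The linear/nonlinear split mentioned above can be illustrated at a single point. Under the standard Boussinesq form, tau = (2/3)k I - 2 nu_t S + tau_nonlinear, the linear part is the eddy-viscosity term and the remainder is the nonlinear part. The tensor values below are illustrative assumptions, not data from the paper, and the least-squares choice of nu_t is one common convention.

```python
import numpy as np

# Hypothetical single-point sketch: split a given Reynolds stress tensor
# into a linear (eddy-viscosity) part and a nonlinear remainder.
# tau and S are symmetric 3x3 tensors with illustrative values.
tau = np.array([[0.6, 0.1, 0.0],
                [0.1, 0.4, 0.05],
                [0.0, 0.05, 0.5]])
S = np.array([[-0.2, -0.3, 0.0],
              [-0.3, 0.1, -0.1],
              [0.0, -0.1, 0.1]])  # trace-free mean strain rate

k = 0.5 * np.trace(tau)                  # turbulent kinetic energy
a = tau - (2.0 / 3.0) * k * np.eye(3)    # deviatoric (anisotropic) part

# Eddy viscosity from a least-squares projection of a onto -2S
nu_t = -np.tensordot(a, S) / (2.0 * np.tensordot(S, S))

tau_linear = (2.0 / 3.0) * k * np.eye(3) - 2.0 * nu_t * S
tau_nonlinear = tau - tau_linear         # remainder beyond Boussinesq

print(nu_t)
print(np.allclose(tau_linear + tau_nonlinear, tau))  # exact decomposition
```

By construction the nonlinear part is orthogonal (in the Frobenius sense) to the strain rate, which is what makes treating the linear part implicitly in the RANS solver attractive for conditioning.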
Despite their well-known limitations, Reynolds-averaged Navier-Stokes (RANS) models are still the workhorse tools for turbulent flow simulations in today's engineering analysis, design, and optimization. While the predictive capability of RANS models depends on many factors, for many practical flows the turbulence models are by far the largest source of uncertainty.
Machine learning (ML) provides novel and powerful ways of accurately and efficiently recognizing complex patterns, emulating nonlinear dynamics, and predicting the spatio-temporal evolution of weather and climate processes. Off-the-shelf ML models, however, do not necessarily obey the fundamental governing laws of physical systems, nor do they generalize well to scenarios on which they have not been trained. We survey systematic approaches to incorporating physics and domain knowledge into ML models and distill these approaches into broad categories. Through 10 case studies, we show how these approaches have been used successfully for emulating, downscaling, and forecasting weather and climate processes. The accomplishments of these studies include greater physical consistency, reduced training time, improved data efficiency, and better generalization. Finally, we synthesize the lessons learned and identify scientific, diagnostic, computational, and resource challenges for developing truly robust and reliable physics-informed ML models for weather and climate processes. This article is part of the theme issue ‘Machine learning for weather and climate modelling’.
Rare-earth (or yttrium) doped BaCeO₃ has been widely investigated as a proton-conducting material. Usually, the trivalent dopants are assumed to occupy the Ce⁴⁺ site, which introduces oxygen vacancies into the perovskite structure and furthers the protonic conductivity. Recent studies indicate the possibility of dopant incorporation on the Ba²⁺ site, which is unfavorable for protonic conductivity. In this work, atomistic simulation techniques, especially the supercell approach, have been developed to investigate the questions of dopant site-selectivity and cation nonstoichiometry in doped BaCeO₃. Our calculations predict that, on energetic grounds, Ba²⁺-site deficiency shifts trivalent dopant incorporation onto the Ba²⁺ site. These results confirm that the dopant partitioning or site occupancy of trivalent dopants will be sensitive to the precise Ba/Ce ratio, and hence to the experimental processing conditions. The relative energies explain the experimentally observed "amphoteric" behavior of Nd, with significant dopant partitioning over both Ba and Ce sites. Such partitioning reduces the concentration of oxygen vacancies, which, in turn, lowers proton uptake and decreases proton conductivity relative to dopant incorporation solely on the Ce⁴⁺ site.
Fast prediction of permeability directly from images, enabled by image-recognition neural networks, is a novel pore-scale modeling method with great potential. This article presents a framework that includes (1) generation of porous media samples, (2) computation of permeability via fluid dynamics simulations, (3) training of convolutional neural networks (CNNs) with the simulated data, and (4) validation against simulations. Comparison of the machine learning results with the ground truths suggests excellent predictive performance across a wide range of porosities and pore geometries, especially for those with dilated pores. Owing to such heterogeneity, the permeability cannot be estimated using the conventional Kozeny-Carman approach. Computational time was reduced by several orders of magnitude compared to fluid dynamics simulations. We found that, by including physical parameters known to affect permeability in the neural network, the physics-informed CNN generated better results than the regular CNN, although the improvements vary with the implemented heterogeneity.

Computation of pore-scale transport properties from pore-scale images is an important aspect of image-based pore-scale studies. Such computations are generally performed in two ways: the direct simulation approach and the simplified network approach. In the first approach, the microscopic transport equations are solved directly on the geometry shown by the pore-scale images to obtain averaged properties such as permeability, relative permeability, or dispersion coefficient. Both single and multiphase flows can be accounted for, and both reactive and non-reactive transport equations can be solved. This direct approach is generally considered to be more accurate, but the computational cost is very high. For processes such as multiphase flows and reactive transport with slow kinetics, it is nearly impossible to solve the governing equations in a medium of even moderate size.
Therefore, the second, alternative approach is to first abstract the porous medium as a discrete network. By applying simplified flow and transport laws on the network, the computational cost of obtaining averaged properties can be effectively lowered [11]. Some transport properties of porous media, such as permeability, are solely functions of the pore geometry. Therefore, it should be possible to predict them using a neural network approach, i.e., to develop a surrogate model that directly maps a pore geometry to physical properties. Such a task resembles image classification [12,13], where a model takes an image as input and gives a classification label as output by recognizing the object in the image, e.g., cars, animals, or even subtypes thereof (i.e., car make or animal breed). Once constructed, such surrogate models can potentially enable fast prediction of the physical properties of porous media without performing direct simulations or network calculations. Recent studies of chemical imaging of rocks also involve surrogate models. For example, Hao et al. [14] generated a molecula...
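The surrogate idea above can be sketched end to end with toy data: compute the conventional Kozeny-Carman estimate from porosity as the baseline mentioned earlier, then train a small regressor mapping binary pore images directly to a permeability-like label. Everything here is an illustrative assumption: the image size, the label formula (which depends only on porosity), and the small fully connected network standing in for the article's CNN.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

def kozeny_carman(phi, specific_surface=1.0, c=5.0):
    """Conventional baseline: k = phi^3 / (c * S^2 * (1 - phi)^2)."""
    return phi**3 / (c * specific_surface**2 * (1.0 - phi) ** 2)

def make_sample():
    """Random binary 16x16 pore image (1 = pore) and a toy permeability label.

    The label depends only on porosity, purely for illustration; real labels
    would come from fluid dynamics simulations on the image geometry.
    """
    img = (rng.random((16, 16)) < rng.uniform(0.2, 0.8)).astype(float)
    return img.ravel(), kozeny_carman(img.mean())

X, y = map(np.array, zip(*[make_sample() for _ in range(400)]))

# Small fully connected net standing in for the CNN surrogate
surrogate = MLPRegressor(hidden_layer_sizes=(32,), max_iter=3000,
                         random_state=0)
surrogate.fit(X, y)

X_test, y_test = map(np.array, zip(*[make_sample() for _ in range(50)]))
pred = surrogate.predict(X_test)
print(pred.shape)  # one permeability estimate per unseen image
```

A physics-informed variant, as described above, would additionally feed known physical parameters (e.g., porosity or specific surface) into the network alongside the raw image.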