2016
DOI: 10.1016/j.jcp.2015.11.012

A paradigm for data-driven predictive modeling using field inversion and machine learning

Abstract: We propose a modeling paradigm, termed field inversion and machine learning (FIML), that seeks to comprehensively harness data from sources such as high-fidelity simulations and experiments to aid the creation of improved closure models for computational physics applications. In contrast to inferring model parameters, this work uses inverse modeling to obtain corrective, spatially distributed functional terms, offering a route to directly address model-form errors. Once the inference has been performed over a …
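As a rough illustration of the field-inversion stage described in the abstract, the sketch below infers a spatially distributed corrective field beta(x) in a toy 1-D Poisson model by minimizing the mismatch with synthetic "truth" data plus a Tikhonov penalty. The model problem, regularization weight, and adjoint-style gradient are illustrative assumptions, not the paper's actual setup.

```python
# Minimal sketch of the field-inversion step of FIML (illustrative, not the
# paper's configuration): infer a corrective field beta(x) in a low-fidelity
# model so that its output matches high-fidelity data.
import numpy as np
from scipy.optimize import minimize

n = 64
x = np.linspace(0.0, 1.0, n + 2)[1:-1]          # interior nodes
h = x[1] - x[0]

# Finite-difference Laplacian for -u'' = beta(x) * q(x), u(0) = u(1) = 0.
A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2
q = np.ones(n)                                   # baseline source term

def solve_forward(beta):
    """Low-fidelity model: returns u for a given corrective field beta."""
    return np.linalg.solve(A, beta * q)

# Synthetic "high-fidelity" data generated with a hidden corrective field.
beta_true = 1.0 + 0.5 * np.sin(2.0 * np.pi * x)
u_data = solve_forward(beta_true)

lam = 1e-6                                       # Tikhonov regularization weight

def objective_and_grad(beta):
    u = solve_forward(beta)
    misfit = u - u_data
    J = np.sum(misfit**2) + lam * np.sum((beta - 1.0)**2)
    # Adjoint-style gradient: A is symmetric, so A^{-T} = A^{-1}.
    adj = np.linalg.solve(A, misfit)
    grad = 2.0 * q * adj + 2.0 * lam * (beta - 1.0)
    return J, grad

res = minimize(objective_and_grad, np.ones(n), jac=True, method="L-BFGS-B")
beta_inferred = res.x
print("max |beta_inferred - beta_true| =", np.max(np.abs(beta_inferred - beta_true)))
```

The inferred field would then serve as training data for the machine-learning stage, which maps local features to the correction so it can be deployed in new simulations.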

Cited by 466 publications (241 citation statements)
References 21 publications
“…Machine learning has been used to identify and model discrepancies in the Reynolds stress tensor between a RANS model and high-fidelity simulations (Ling & Templeton 2015; Parish & Duraisamy 2016; Ling et al. 2016b; Xiao et al. 2016; Singh et al. 2017). Ling & Templeton (2015) compare support vector machines, AdaBoost decision trees, and random forests to classify and predict regions of high uncertainty in the Reynolds stress tensor.…”
Section: Parsimonious Nonlinear Models (mentioning)
confidence: 99%
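The classifier comparison quoted above can be pictured with a short, hedged sketch: three off-the-shelf scikit-learn classifiers are trained to flag cells where the Reynolds-stress model is presumed unreliable. The synthetic features and labels are placeholders; the actual features and labeling criteria of Ling & Templeton (2015) are not reproduced here.

```python
# Hedged sketch: compare three classifiers for flagging "high-discrepancy"
# cells from per-cell flow features (synthetic stand-in data).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier

rng = np.random.default_rng(0)
n_cells, n_features = 5000, 6
X = rng.normal(size=(n_cells, n_features))          # stand-in flow features
# Stand-in label: "high discrepancy" where a nonlinear combination is large.
y = (X[:, 0] ** 2 + np.abs(X[:, 1] * X[:, 2]) > 1.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

classifiers = {
    "SVM (RBF)": SVC(kernel="rbf", C=1.0),
    "AdaBoost trees": AdaBoostClassifier(n_estimators=200),
    "Random forest": RandomForestClassifier(n_estimators=200),
}
for name, clf in classifiers.items():
    clf.fit(X_tr, y_tr)
    print(f"{name:15s} test accuracy = {clf.score(X_te, y_te):.3f}")
```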
“…Xiao et al. (2016) leveraged sparse online velocity measurements in a Bayesian framework to infer these discrepancies. In related work, Parish & Duraisamy (2016) developed the field inversion and machine learning modeling framework, which builds corrective models based on inverse modeling. This framework was later used by Singh et al. (2017) to develop a neural-network-enhanced correction to the Spalart-Allmaras RANS model, with excellent performance.…”
Section: Parsimonious Nonlinear Models (mentioning)
confidence: 99%
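The machine-learning stage referred to above, in which inferred corrections are generalized to new flows, might look roughly like the following: a small neural network regresses a corrective multiplier beta from local flow features. The feature set, architecture, and synthetic data are assumptions made for illustration, not the construction of Singh et al. (2017).

```python
# Hedged sketch of the ML stage of FIML: map local features to the inferred
# corrective field beta, so the correction can be embedded in the RANS model.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
n_points, n_features = 4000, 4
features = rng.normal(size=(n_points, n_features))   # stand-in local features
# Stand-in "inferred" corrective field from the field-inversion stage.
beta = 1.0 + 0.3 * np.tanh(features[:, 0]) - 0.2 * features[:, 1] * features[:, 2]

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0),
)
model.fit(features, beta)

# In deployment, the trained map would be queried inside the turbulence-model
# source term at each cell of a new simulation.
beta_new = model.predict(rng.normal(size=(5, n_features)))
print(beta_new)
```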
“…Since a number of authors have begun to consider the use of machine/deep learning for problems in traditional computational physics (see, e.g., [1,2,3,4,5,6,7,8,9,10,11,12]), we are motivated to consider methodologies that constrain the interpolatory results of a network to be contained within a physically admissible region. Quite recently, [13] proposed adding physical constraints to generative adversarial networks (GANs), also considering projection as we do, while stressing the interplay between scientific computing and machine learning; we refer the interested reader to their work for even more motivation for such approaches.…”
Section: Introduction (mentioning)
confidence: 99%
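A minimal, hedged example of the "projection onto a physically admissible region" idea mentioned in this statement: a raw network prediction is projected onto a simple convex set (pointwise non-negativity plus a fixed integral). The constraint set and the bisection-based projection are illustrative choices, not those of the cited work.

```python
# Hedged illustration: Euclidean projection of a predicted field onto
# {f >= 0, sum(f) * cell_volume == total_mass}, via bisection on the
# Lagrange multiplier of the equality constraint.
import numpy as np

def project_admissible(field, total_mass, cell_volume, max_iter=100, tol=1e-12):
    lo = field.min() - total_mass / cell_volume
    hi = field.max() + total_mass / cell_volume
    proj = field
    for _ in range(max_iter):
        mu = 0.5 * (lo + hi)
        proj = np.maximum(field - mu, 0.0)
        mass = proj.sum() * cell_volume
        if abs(mass - total_mass) < tol:
            break
        if mass > total_mass:
            lo = mu          # too much mass: increase the shift mu
        else:
            hi = mu          # too little mass: decrease the shift mu
    return proj

# Example: a raw prediction with small negative values and the wrong integral.
raw = np.array([0.2, -0.05, 0.4, 0.1, -0.01, 0.3])
projected = project_admissible(raw, total_mass=1.0, cell_volume=1.0)
print(projected, projected.sum())
```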
“…In [39,48], corrections to reduced models are inferred with Bayesian inference for several different parameter configurations. The inferred corrections, together with the corresponding parameter configurations, are used as a training set to learn a map from the parameters of the model to the corrections with supervised machine learning techniques. The inference and learning approach presented in [39,48] is demonstrated on applications in the context of model reduction for turbulent flow models. The works [31,52] present a data assimilation framework for correcting the model bias of reduced models with data.…”
(mentioning)
confidence: 99%
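The parameters-to-corrections map described in this statement might be sketched as follows: correction fields inferred at a few parameter configurations are compressed with PCA, and a Gaussian-process regressor predicts the compressed coefficients at new parameters. The parameter ranges, synthetic correction fields, and regressor choice are assumptions made for illustration, not the setup of the cited works.

```python
# Hedged sketch: learn a map from model parameters to inferred correction
# fields (synthetic stand-in data) via PCA compression + GP regression.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(2)
n_train, n_dof = 20, 200                             # configurations, field size
params = rng.uniform(0.0, 1.0, size=(n_train, 2))    # e.g. (Reynolds no., geometry)
xs = np.linspace(0.0, 1.0, n_dof)

# Stand-in "inferred" correction fields, one per parameter configuration.
corrections = np.array([
    p[0] * np.sin(2 * np.pi * xs) + p[1] * xs * (1 - xs) for p in params
])

# Compress the correction fields, then learn parameters -> coefficients.
pca = PCA(n_components=4)
coeffs = pca.fit_transform(corrections)
gp = GaussianProcessRegressor().fit(params, coeffs)

# Predict the correction field at an unseen parameter configuration.
new_params = np.array([[0.3, 0.7]])
predicted_correction = pca.inverse_transform(gp.predict(new_params))
print(predicted_correction.shape)                    # (1, 200)
```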