2020
DOI: 10.1007/s00162-019-00512-z

A priori analysis on deep learning of subgrid-scale parameterizations for Kraichnan turbulence

Abstract: In the present study, we investigate different data-driven parameterizations for large eddy simulation of two-dimensional turbulence in the a priori settings. These models utilize resolved flow field variables on the coarser grid to estimate the subgrid-scale stresses. We use data-driven closure models based on localized learning that employs multilayer feedforward artificial neural network (ANN) with point-to-point mapping and neighboring stencil data mapping, and convolutional neural network (CNN) fed by dat…
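The neighboring-stencil ANN mapping described in the abstract can be illustrated with a minimal sketch: a small feedforward network that takes the 3×3 stencil of a resolved field around each coarse-grid point and returns a pointwise subgrid-scale stress estimate. The field, network sizes, and (untrained) weights below are hypothetical, not the authors' trained model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical resolved vorticity field on a periodic coarse 2-D grid
n = 16
omega_bar = rng.standard_normal((n, n))

def stencil_features(field, i, j):
    """Collect the 3x3 neighboring-stencil values around (i, j),
    with periodic wrap-around as in a doubly periodic turbulence box."""
    idx = [-1, 0, 1]
    return np.array([field[(i + di) % n, (j + dj) % n]
                     for di in idx for dj in idx])

# Untrained two-layer MLP: 9 stencil inputs -> 8 hidden units -> 1 SGS stress value
W1, b1 = rng.standard_normal((8, 9)) * 0.1, np.zeros(8)
W2, b2 = rng.standard_normal((1, 8)) * 0.1, np.zeros(1)

def ann_closure(x):
    h = np.tanh(W1 @ x + b1)      # hidden layer with tanh activation
    return (W2 @ h + b2)[0]       # predicted subgrid-scale stress at the point

tau_pred = np.array([[ann_closure(stencil_features(omega_bar, i, j))
                      for j in range(n)] for i in range(n)])
print(tau_pred.shape)  # one SGS estimate per coarse-grid point: (16, 16)
```

The point-to-point variant mentioned in the abstract is the degenerate case of this sketch with a single-value input instead of the 9-point stencil.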

Cited by 56 publications (53 citation statements). References 93 publications.
“…In order to gain online computational efficiency at the cost of offline simulations, a physical kernel approach can be imposed in training ML models to enable accurate but fast coarse-scale simulation while still intelligently modeling the representation losses and residuals using deep neural networks (i.e., incorporating numerous functional forms of known closure kernels in the training as opposed to static phenomenological models [184,215,216,259]). The training data can also be compressed by using both linear (e.g., matrix factorization [121]) and nonlinear techniques (e.g., variational autoencoders [320], manifold learning [201,207,363], and stochastic neighbor embedding algorithms [140,194,336]), and causality assessment tools [127] can be employed to explore the correlation between input kernels and output variables in order to remove irrelevant input features from the training.…”
Section: Hybrid Analysis and Modelingmentioning
confidence: 99%
“…These models would not have been possible without the synergistic combination of both physics and machine learning. In the work of Pawar et al [19], an artificial neural network (multilayer perceptron) and a convolutional neural network are used to construct data-driven subgrid-scale closure models for two-dimensional turbulence. For the development of the models, they consider a number of spatial points and variables in learning the subgrid-scale stresses at a spatial point.…”
Section: Summary Of Articles In This Special Issuementioning
confidence: 99%
“…However, we do not dwell on other architectures since finding an optimal architecture is not the objective of this study. Indeed, non-local approaches to neural network models have been suggested in recent literature (for instance Maulik & San 2017; Duraisamy 2020; Pawar et al. 2020). Such non-local neural network models may benefit from exploiting multi-point correlations in resolved flow parameters and may be analogous to non-local mathematical models, such as deconvolutional LES models (see Stolz & Adams (1999), for instance).…”
Section: Numerics and Optimisationmentioning
confidence: 99%
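The deconvolutional LES models referenced in the statement above can be illustrated with a minimal approximate-deconvolution (van Cittert) sketch in one dimension. The signal, the three-point top-hat filter standing in for the LES filter, and the iteration count are all assumptions for illustration:

```python
import numpy as np

n = 64
x = 2 * np.pi * np.arange(n) / n
u = np.sin(x) + 0.5 * np.sin(3 * x)    # hypothetical smooth resolved-scale signal

def box_filter(f):
    """Three-point top-hat filter G (periodic), standing in for the LES filter."""
    return (np.roll(f, 1) + f + np.roll(f, -1)) / 3.0

u_bar = box_filter(u)                  # filtered (coarse-grained) field

# Van Cittert iterations: u* <- u* + (u_bar - G u*), approximating G^{-1} u_bar
u_star = u_bar.copy()
for _ in range(5):
    u_star = u_star + (u_bar - box_filter(u_star))

# The deconvolved field recovers u more closely than the filtered field does
print(np.linalg.norm(u_star - u) < np.linalg.norm(u_bar - u))  # True
```

This non-local character (each deconvolved value depends on a widening neighborhood of filtered values) is what the quoted passage draws the analogy to for stencil-based and convolutional network closures.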