2022
DOI: 10.48550/arxiv.2201.02928
Preprint

Frame invariant neural network closures for Kraichnan turbulence

Suraj Pawar,
Omer San,
Adil Rasheed
et al.

Abstract: Numerical simulations of geophysical and atmospheric flows have to rely on parameterizations of subgrid-scale processes due to their limited spatial resolution. Despite substantial progress in developing parameterization (or closure) models for subgrid-scale (SGS) processes using physical insights and mathematical approximations, they remain imperfect and can lead to inaccurate predictions. In recent years, machine learning has been successful in extracting complex patterns from high-resolution spatio-temporal…


Cited by 6 publications (9 citation statements)
References 79 publications (115 reference statements)
“…A priori and a posteriori tests show that all these physics-constrained CNNs outperform the physics-agnostic CNN in the small-data regime (n_tr = 50). Improvements of the data-driven SGS closures using the GCNN, which uses an equivariance-preserving architecture, are consistent with the findings of a recent study [69], though it should be mentioned that DA, which simply builds equivariance into the training samples and can be used with any architecture, shows comparable or in some cases even better performance than the GCNN. The spectra from the LES with physics-constrained CNNs better match the FDNS spectra, especially at the tails (high-wavenumber structures).…”
Section: Summary and Discussion (supporting)
confidence: 87%
“…While translation equivariance is already achieved in a regular CNN by weight sharing [92], rotational equivariance is not guaranteed. Recent studies show that rotational equivariance can actually be critical in data-driven SGS modeling [27,66,69,74]. To capture rotational equivariance in the small-data regime, we propose two separate approaches: (1) DA, by including 3 additional rotated (by 90°, 180°, and 270°) counterparts of each original FDNS snapshot in the training set [74], and (2) using a GCNN architecture, which enforces rotational equivariance by construction [92,93].…”
Section: Physics-Constrained CNNs: Incorporating Rotational Equivarian... (mentioning)
confidence: 99%
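The rotation-based data augmentation (DA) described in the statement above can be sketched as follows; the function and variable names are illustrative, not taken from the cited paper, and this is a minimal sketch assuming square 2D snapshots stacked along the first axis.

```python
import numpy as np

def augment_with_rotations(snapshots):
    """Augment 2D snapshots with their 90°, 180°, and 270° rotations.

    snapshots: array of shape (n, H, W) -- original training fields
    returns:   array of shape (4*n, H, W) -- originals plus 3 rotated copies
    """
    # k = 0 keeps the originals; k = 1, 2, 3 rotate in the (H, W) plane.
    rotated = [np.rot90(snapshots, k=k, axes=(1, 2)) for k in range(4)]
    return np.concatenate(rotated, axis=0)

# Example: 50 snapshots (the small-data regime mentioned above) become 200.
fields = np.random.rand(50, 64, 64)
augmented = augment_with_rotations(fields)
print(augmented.shape)  # (200, 64, 64)
```

Unlike a GCNN, this approach only encourages equivariance statistically through the training set, which is why it can be paired with any architecture.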
“…Such NNs incorporating geometric symmetries have started to be applied to various systems [11,12], including fluid systems [13-17]. Super-resolution (SR) refers to methods of estimating high-resolution images from low-resolution ones. Super-resolution is studied in computer vision as an application of NNs.…”
Section: Introduction (mentioning)
confidence: 99%
“…Siddai et al. [15] developed an equivariant CNN and estimated three-dimensional steady flows around several particles from the particle positions, the mean flow, and the Reynolds number. Pawar et al. [16] proposed a subgrid-scale model for two-dimensional turbulence using an equivariant CNN. Their model was accurate and gave stable energy spectra for various Reynolds numbers.…”
Section: Introduction (mentioning)
confidence: 99%
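The rotational equivariance these citing papers emphasize (f applied to a rotated field should equal the rotated output of f) can be checked numerically. A minimal sketch with illustrative names, using a periodic 5-point Laplacian as an example of an operator that is equivariant under 90° rotations:

```python
import numpy as np

def is_rotation_equivariant(f, x, atol=1e-8):
    """Check f(rot90(x)) == rot90(f(x)) for all four 90° rotations."""
    return all(
        np.allclose(f(np.rot90(x, k)), np.rot90(f(x), k), atol=atol)
        for k in range(4)
    )

def laplacian(x):
    # 5-point Laplacian with periodic boundaries: an isotropic stencil,
    # hence equivariant under 90° rotations.
    return (np.roll(x, 1, 0) + np.roll(x, -1, 0)
            + np.roll(x, 1, 1) + np.roll(x, -1, 1) - 4 * x)

field = np.random.rand(32, 32)
print(is_rotation_equivariant(laplacian, field))  # True
```

A GCNN passes this check by construction for its learned map, whereas a plain CNN trained without augmentation generally does not.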