Journal of Fluid Mechanics, 2019
DOI: 10.1017/jfm.2019.700
Abstract: Unsteady flow fields over a circular cylinder are trained and predicted using four different deep learning networks: generative adversarial networks with and without consideration of conservation laws and convolutional neural networks with and without consideration of conservation laws. Flow fields at future occasions are predicted based on information of flow fields at previous occasions. Predictions of deep learning networks are conducted on flow fields at Reynolds numbers that were not informed during train…

Cited by 199 publications (111 citation statements)
References 26 publications
“…This method can be utilized for the inflow turbulence generation problem. In general, convolutional neural networks (CNNs) are used to better represent the spatial correlation of flow [13,14,15,16,17,18]. However, the typical CNN structure [16] generates blurred fields over time when recursive prediction, which uses the predicted information as the input, is carried out.…”
Section: Introduction (mentioning; confidence: 99%)
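The blurring effect described in the excerpt above can be illustrated with a minimal sketch: in recursive prediction, each output is fed back as the next input, so any smoothing introduced by the predictor compounds over steps. Here a 3-point moving average (a hypothetical stand-in for a slightly smoothing CNN, with assumed periodic boundaries) plays the role of the predictor, and the loss of fine-scale content shows up as decaying variance.

```python
import numpy as np

def blur_step(field):
    # Stand-in for a CNN predictor whose output is slightly smoothed:
    # a simple 3-point moving average with periodic boundaries.
    return (np.roll(field, -1) + field + np.roll(field, 1)) / 3.0

rng = np.random.default_rng(0)
field = rng.standard_normal(256)

# Recursive prediction: each predicted field is fed back as the next input.
current = field.copy()
for step in range(50):
    current = blur_step(current)

# Fine-scale content decays with recursion depth: the variance drops sharply,
# which is the 1-D analogue of the progressively blurred flow fields.
print(np.var(field), np.var(current))
```

Adversarial training counteracts exactly this failure mode, since a discriminator penalizes outputs that look smoother than real flow fields.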
“…During training, a mini-batch size of mbs = 8 and learning rates of 0.00004 and 0.02 were chosen for the generator and discriminator networks, respectively. These values proved valuable in the work of Mathieu et al. 24 as well as in the study of Lee and You 25.…”
Section: Results and Discussion (mentioning; confidence: 92%)
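The hyper-parameters quoted above can be sketched as an optimizer setup. Note the excerpt gives only the mini-batch size and the two learning rates; the optimizer choice (Adam), the placeholder linear networks, and the single update step below are assumptions for illustration, not the cited architecture.

```python
import torch
from torch import nn

# Placeholder networks; the cited works use convolutional
# generator/discriminator architectures instead.
generator = nn.Sequential(nn.Linear(16, 16))
discriminator = nn.Sequential(nn.Linear(16, 1))

# Values quoted in the excerpt: mini-batch size 8 and separate
# learning rates for the generator and discriminator.
mbs = 8
opt_g = torch.optim.Adam(generator.parameters(), lr=0.00004)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=0.02)

# One illustrative update on random data: the discriminator optimizer
# only touches discriminator parameters, even though gradients flow
# through the generator as well.
x = torch.randn(mbs, 16)
d_loss = discriminator(generator(x)).mean()
opt_d.zero_grad()
d_loss.backward()
opt_d.step()
```

Using a much smaller learning rate for the generator than for the discriminator is a common way to keep GAN training balanced.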
“…where U and V are the x- and y-components of the velocity field, respectively, and P is the scalar pressure field. m is the batch size, n_x is the number of grid points along the x-direction, n_y is the number of grid points along the y-direction, L is the number of layers with trainable weights, and n_l represents the number of trainable weights in layer l. MSE is the mean squared error, and GS is gradient sharpening, or the gradient difference loss (GDL) (Mathieu et al., 2015; Lee and You, 2018). In this paper, we use gradient sharpening based on a central difference operator.…”
Section: Network Training and Hyper-parameter Study (mentioning; confidence: 99%)
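A gradient difference loss built on a central difference operator, as described in the excerpt above, can be sketched as follows. The periodic (roll-based) boundary handling and the 64×64 field shapes are assumptions for illustration; the cited work does not specify them in this excerpt.

```python
import numpy as np

def central_diff(f, axis):
    # Second-order central difference, (f[i+1] - f[i-1]) / 2,
    # with periodic boundaries via np.roll (an assumption here).
    return (np.roll(f, -1, axis=axis) - np.roll(f, 1, axis=axis)) / 2.0

def gdl(pred, true):
    # Gradient difference loss: mean squared mismatch between the
    # central-difference gradients of prediction and truth, summed
    # over the x- and y-directions.
    loss = 0.0
    for axis in (0, 1):
        loss += np.mean((central_diff(pred, axis) - central_diff(true, axis)) ** 2)
    return loss

rng = np.random.default_rng(1)
true = rng.standard_normal((64, 64))
pred = true + 0.1 * rng.standard_normal((64, 64))

print(gdl(pred, true))  # small for a prediction close to the truth
print(gdl(true, true))  # exactly zero when the gradients match
```

Because the penalty acts on gradients rather than values, it pushes the network to reproduce sharp spatial variations that plain MSE tends to smear out.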
“…Quantitative results are presented in Tables 3 and 4. To penalize the difference of the gradient in the loss function, and to address the lack of sharpness in predictions, we use gradient sharpening (GS) (Mathieu et al., 2015; Lee and You, 2018) in the loss function combination and present the cost function over the training set as,…”
Section: Shape, Angle of Attack, and Reynolds Number Variation (mentioning; confidence: 99%)