2022
DOI: 10.1017/dce.2022.2

Performance and accuracy assessments of an incompressible fluid solver coupled with a deep convolutional neural network

Abstract: The resolution of the Poisson equation is usually one of the most computationally intensive steps for incompressible fluid solvers. Lately, Deep Learning, and especially convolutional neural networks (CNNs), has been introduced to solve this equation, leading to significant inference-time reductions at the cost of a lack of guarantee on the accuracy of the solution. This drawback might lead to inaccuracies and potentially unstable simulations, and prevents performing fair assessments of the CNN speedup for different n…
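To illustrate why the pressure Poisson solve dominates the cost of incompressible solvers (the step the paper's CNN replaces), here is a minimal sketch of a classical Jacobi iteration on a uniform grid. This is a generic illustration, not the paper's solver; the function name and boundary choice (homogeneous Dirichlet) are assumptions.

```python
import numpy as np

def jacobi_poisson(f, h, n_iters=500):
    """Solve the Poisson equation lap(p) = f on a uniform grid of
    spacing h, with p = 0 on the boundary, using Jacobi sweeps.
    Many sweeps are needed for convergence, which is why this step
    is a prime target for CNN surrogates."""
    p = np.zeros_like(f, dtype=float)
    for _ in range(n_iters):
        # NumPy evaluates the right-hand side fully before assigning,
        # so this is a true Jacobi (not Gauss-Seidel) update.
        p[1:-1, 1:-1] = 0.25 * (
            p[2:, 1:-1] + p[:-2, 1:-1] +
            p[1:-1, 2:] + p[1:-1, :-2] - h**2 * f[1:-1, 1:-1]
        )
    return p
```

A CNN surrogate amortizes this iterative cost into a single forward pass, at the price of losing the iteration's convergence guarantee, which is the trade-off the paper assesses.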

Cited by 8 publications (5 citation statements)
References 88 publications
“…A CNN is a type of DL algorithm specialized in capturing spatiotemporal features from multi-dimensional data ( Ajuria Illarramendi et al, 2022 ; LeCun et al, 2015 ). The convolutional filters and local connectivity in CNNs facilitate a better capture of the influence of the local features of the input data on the results ( Liao et al, 2023 ).…”
Section: Methods
confidence: 99%
“…Ajuria-Illaramendi et al. (2022) also revealed that using LTL improves the robustness and generalisation capabilities of the temporal predictions made by neural networks for operating conditions far from those used during training. Here, we chose to use simpler MLPs with a time delay in the input, as they are less expensive and easier to implement and train than LSTM networks or RNNs.…”
Section: Introduction
confidence: 98%
“…While such an approach has shown promising results, difficulties still exist, especially for long-time predictions. To tackle this issue while avoiding the complexity of recurrent networks, the learning of shift operators with short input history has been improved using long-term losses (LTLs), first introduced by Tompson et al (2017) and further exploited by Ajuria-Illaramendi, Bauerheim & Cuenot (2022) and Colombo et al (2023). The main idea is to collect during training new data by advancing in time the current network, and compare the predictions with the database.…”
Section: Introduction
confidence: 99%
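The long-term loss described in the quote above — advancing the current network in time during training and comparing each predicted state with the database — can be sketched as follows. This is a hedged illustration of the idea, not the cited implementation; `step`, `long_term_loss`, and the toy operator are hypothetical names introduced here.

```python
import numpy as np

def long_term_loss(step, u0, reference, n_steps):
    """Unroll the surrogate `step` from the initial state u0 for
    n_steps, comparing every intermediate prediction with the
    reference trajectory (mean-squared error per step)."""
    u = u0
    loss = 0.0
    for k in range(n_steps):
        u = step(u)  # advance in time with the current network
        loss += np.mean((u - reference[k]) ** 2)
    return loss / n_steps

# Toy example: a damping operator stands in for the trained network.
step = lambda u: 0.5 * u
u0 = np.ones(4)
ref = [0.5 * np.ones(4), 0.25 * np.ones(4)]
print(long_term_loss(step, u0, ref, 2))  # exact rollout -> loss 0.0
```

Penalizing the whole rollout rather than a single step is what improves the robustness of long-time predictions that the citing papers report.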
“…The editors and publisher of Data-Centric Engineering would like to include the Open Data badge in this article Ajuria Illarramendi E, et al (2022).…”
confidence: 99%