2021
DOI: 10.48550/arxiv.2101.05555
Preprint

Non-intrusive Surrogate Modeling for Parametrized Time-dependent PDEs using Convolutional Autoencoders

Stefanos Nikolopoulos,
Ioannis Kalogeris,
Vissarion Papadopoulos

Abstract: This work presents a non-intrusive surrogate modeling scheme based on machine learning technology for the predictive modeling of complex systems described by parametrized time-dependent PDEs. For this type of problem, typical finite element solution approaches involve the spatiotemporal discretization of the PDE and the solution of the corresponding linear system of equations at each time step. Instead, the proposed method utilizes a convolutional autoencoder in conjunction with a feed-forward neural network to …
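
To make concrete what such a surrogate replaces, the following is a minimal sketch of the conventional time-stepping solve the abstract alludes to, applied to an illustrative 1D heat equation with SciPy; the grid size, time step, and diffusivity are stand-in assumptions, not details from the paper.

```python
# Conventional time-stepping solve: one linear system per time step.
# Minimal sketch with an illustrative 1D heat equation; the matrix here
# is a finite-difference Laplacian standing in for a FEM stiffness matrix.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n, dt, kappa = 200, 1e-3, 1.0            # grid size, time step, diffusivity
h = 1.0 / (n + 1)
K = sp.diags([1, -2, 1], [-1, 0, 1], shape=(n, n)) * (kappa / h**2)
A = sp.identity(n) - dt * K              # implicit Euler system matrix
solve = spla.factorized(A.tocsc())       # factorize once, reuse every step

x = np.linspace(h, 1 - h, n)
u = np.exp(-100 * (x - 0.5) ** 2)        # initial condition
for _ in range(100):
    u = solve(u)                         # one linear solve per time step
```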

Cited by 5 publications (7 citation statements)
References 40 publications (43 reference statements)

“…The CAE has been widely applied to reduced-order models in recent years and more details about this type of network along with schematic diagrams can be found in Gonzalez and Balajewicz [42], Xu and Duraisamy [43], Wu et al [44], Nikolopoulos et al [45]. In a nutshell, the CAE is a type of feed-forward neural network with convolutional layers that attempts to learn the identity map [46].…”
Section: Dimensionality Reduction
confidence: 99%
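
The CAE described above, a feed-forward network with convolutional layers trained to reproduce its input, can be made concrete with a short sketch. The following is a minimal, hypothetical PyTorch convolutional autoencoder for single-channel 2D solution snapshots; the layer sizes and latent dimension are illustrative assumptions, not taken from the cited papers.

```python
# Minimal sketch of a convolutional autoencoder (CAE), assuming
# 64x64 single-channel solution snapshots; all sizes are illustrative.
import torch
import torch.nn as nn

class CAE(nn.Module):
    def __init__(self, latent_dim: int = 16):
        super().__init__()
        # Encoder: convolutions compress the snapshot to a latent vector.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, stride=2, padding=1),   # 64 -> 32
            nn.ReLU(),
            nn.Conv2d(8, 16, kernel_size=3, stride=2, padding=1),  # 32 -> 16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(16 * 16 * 16, latent_dim),
        )
        # Decoder: mirror of the encoder, reconstructing the snapshot.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 16 * 16 * 16),
            nn.Unflatten(1, (16, 16, 16)),
            nn.ConvTranspose2d(16, 8, kernel_size=3, stride=2,
                               padding=1, output_padding=1),       # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(8, 1, kernel_size=3, stride=2,
                               padding=1, output_padding=1),       # 32 -> 64
        )

    def forward(self, x):
        # Learning the identity map: reconstruct x from its latent code.
        return self.decoder(self.encoder(x))

model = CAE()
x = torch.randn(4, 1, 64, 64)               # batch of snapshots
loss = nn.functional.mse_loss(model(x), x)  # reconstruction objective
```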
“…whilst the first use of a convolutional autoencoder came 16 years later and was applied to Burgers' equation, advecting vortices and lid-driven cavity flow [31]. In the few years since 2018, many papers have appeared, in which convolutional autoencoders have been applied to sloshing waves, colliding bodies of fluid and smoke convection [32]; flow past a cylinder [33-35]; the Sod shock test and transient wake of a ship [36]; air pollution in an urban environment [37-39]; parametrised time-dependent problems [40]; natural convection problems in porous media [41]; the inviscid shallow water equations [42]; supercritical flow around an airfoil [43]; cardiac electrophysiology [44]; multiphase flow examples [45]; the Kuramoto-Sivashinsky equation [46]; the parametrised 2D heat equation [47]; and a collapsing water column [48]. Of these papers, those which compare autoencoder networks with POD generally conclude that autoencoders can outperform POD [31,33], especially when small numbers of reduced variables are used [41-44].…”
Section: Introduction
confidence: 99%
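
For context on the POD baseline that the quoted comparisons refer to: POD amounts to a truncated SVD of the snapshot matrix, and its reconstruction is linear in the reduced variables. A minimal NumPy sketch, with stand-in data and an assumed truncation rank r, is given below.

```python
# Minimal POD sketch: truncated SVD of a snapshot matrix.
# Shapes and the truncation rank r are illustrative assumptions.
import numpy as np

def pod_basis(snapshots: np.ndarray, r: int) -> np.ndarray:
    """snapshots: (n_dof, n_snapshots); returns the first r POD modes."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :r]                      # (n_dof, r)

rng = np.random.default_rng(0)
S = rng.standard_normal((1024, 200))     # stand-in snapshot matrix
Phi = pod_basis(S, r=10)
coeffs = Phi.T @ S                       # reduced variables (r, n_snapshots)
S_rec = Phi @ coeffs                     # linear POD reconstruction
rel_err = np.linalg.norm(S - S_rec) / np.linalg.norm(S)
```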
“…Once the low-dimensional space has been found, the snapshots are projected onto this space, and the resulting reduced variables (either POD coefficients or latent variables of an autoencoder) can be used to train a neural network, which attempts to learn the evolution of the reduced variables in time (and/or their dependence on a set of parameters). From the references in this paper alone, many examples exist of feed-forward and recurrent neural networks having been used for the purpose of learning the evolution of time series data, for example, by multi-layer perceptrons [12,13,40,41,43,54-60], Gaussian Process Regression [11,45,61-63] and Long Short-Term Memory networks [31,32,34,35,38,51,64]. When using these types of neural network to predict in time, if the reduced variables stray outside of the range of values encountered during training, the neural network can produce unphysical, divergent results [39,51,52,64,65].…”
Section: Introduction
confidence: 99%
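
The time-stepping role described in the quote, learning the evolution of the reduced variables, can be sketched as a small autoregressive network. Below is an illustrative PyTorch MLP that advances the latent state one step at a time; the rollout loop is also where the out-of-range divergence mentioned above can set in. All sizes and the network shape are assumptions, not taken from the cited works.

```python
# Sketch of learning latent dynamics: an MLP that advances the reduced
# variables one time step. Sizes and training data are illustrative.
import torch
import torch.nn as nn

latent_dim = 16
step_net = nn.Sequential(
    nn.Linear(latent_dim, 64), nn.Tanh(),
    nn.Linear(64, 64), nn.Tanh(),
    nn.Linear(64, latent_dim),
)

def rollout(z0: torch.Tensor, n_steps: int) -> torch.Tensor:
    """Autoregressive prediction: feed each output back as the next input.
    If z drifts outside the range seen in training, errors can compound."""
    zs = [z0]
    for _ in range(n_steps):
        zs.append(step_net(zs[-1]))
    return torch.stack(zs)

z0 = torch.zeros(1, latent_dim)          # initial latent state (stand-in)
trajectory = rollout(z0, n_steps=50)     # (51, 1, latent_dim)
```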
“…Recently, neural networks have been used to perform the interpolation, and examples of this for steady-state parametrised problems can be found in [37,38], both of whom use POD and multi-layer perceptrons, and in [39], who use POD and compare a number of different networks. Examples for time-dependent parametrised problems can be found in [40], who used feed-forward neural networks to model the viscous Burgers' equation; [41], who proposed a nested trio of networks to learn spatial patterns, temporal patterns and to learn the dependence on the model parameters; [42], who combine convolutional autoencoders with recurrent neural networks for Burgers' equation and the shallow water equations; [43,44], both of whom combine an autoencoder and a feed-forward neural network; and [45], who train an MLP with data from both high-fidelity and low-fidelity models to improve the accuracy of the model. In this paper, we set the PredGAN and DA-PredGAN algorithms within a NIROM framework, using POD for the compression step and a GAN for learning how the dynamics depend on the model parameters.…”
Section: Introduction
confidence: 99%
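
Several of the cited schemes, including the surveyed paper as summarized in its abstract, pair a compression step with a network that learns the dependence on parameters and time. The sketch below shows the general shape of such a pipeline under illustrative assumptions: a hypothetical MLP maps (parameters, time) to a latent code, and a stand-in decoder maps the code back to full space. It is a schematic of the idea, not the authors' implementation.

```python
# Schematic parametrized surrogate: map (mu, t) -> latent code -> snapshot.
# All dimensions, and the stand-in decoder, are illustrative assumptions.
import torch
import torch.nn as nn

n_params, latent_dim = 3, 16

# Hypothetical regression network from (parameters, time) to latent code.
param_to_latent = nn.Sequential(
    nn.Linear(n_params + 1, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, latent_dim),
)

decoder = nn.Sequential(                 # stand-in for a trained CAE decoder
    nn.Linear(latent_dim, 64 * 64),
    nn.Unflatten(1, (1, 64, 64)),
)

def predict(mu: torch.Tensor, t: float) -> torch.Tensor:
    """Non-intrusive prediction: no PDE solve, just two network evaluations."""
    inp = torch.cat([mu, torch.tensor([[t]])], dim=1)
    z = param_to_latent(inp)             # reduced variables for (mu, t)
    return decoder(z)                    # full-order snapshot estimate

mu = torch.tensor([[0.5, 1.0, -0.2]])    # example parameter vector
snapshot = predict(mu, t=0.1)            # (1, 1, 64, 64)
```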