2022
DOI: 10.1016/j.engappai.2021.104652
Non-intrusive surrogate modeling for parametrized time-dependent partial differential equations using convolutional autoencoders


Cited by 45 publications (29 citation statements) · References 48 publications
“…It is essential to span the problem's parametric space effectively; thus, sophisticated sampling methods are often utilized, such as the Latin Hypercube [69]. Many surrogate modeling techniques have been introduced over the past years, including linear [47,43,44] and nonlinear [37,38,50] dimensionality reduction methods. The selection of the appropriate method is problem dependent.…”
Section: Surrogate Model
confidence: 99%
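The Latin Hypercube sampling mentioned in this statement can be sketched with `scipy.stats.qmc`; the two parameters and their bounds below are purely illustrative, not taken from the cited works.

```python
# Minimal sketch of Latin Hypercube sampling of a 2-D parameter space.
import numpy as np
from scipy.stats import qmc

n_samples, n_params = 10, 2
sampler = qmc.LatinHypercube(d=n_params, seed=0)
unit_samples = sampler.random(n=n_samples)      # points in [0, 1)^d

# Scale to hypothetical physical ranges, e.g. a diffusivity in
# [0.1, 1.0] and a load amplitude in [1.0, 5.0] (illustrative only).
lower, upper = [0.1, 1.0], [1.0, 5.0]
params = qmc.scale(unit_samples, lower, upper)

# LHS property: each of the n equal-width bins per dimension holds
# exactly one sample, so the parametric space is spanned evenly.
bins = np.floor(unit_samples * n_samples).astype(int)
print(params.shape)                             # (10, 2)
```

Each column of `bins` is a permutation of `0..n_samples-1`, which is what distinguishes LHS from plain uniform random sampling.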
“…For instance, deep feedforward neural networks (FFNNs) have been successfully employed to construct response surfaces of quantities of interest in complex problems [32,33,34,35,36]. Convolutional neural networks (CNNs) in conjunction with FFNNs have been employed to predict the high-dimensional system response at different parameter instances [37,38,39]. In addition, recurrent neural networks have demonstrated great potential in transient problems for propagating the state of the system forward in time without the need to solve systems of equations [40,41].…”
Section: Introduction
confidence: 99%
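The response-surface idea in this statement can be illustrated with a toy example: a small feedforward network trained by plain gradient descent to map a single scalar parameter μ to a quantity of interest q(μ). This is a NumPy sketch under assumed toy settings (one parameter, q(μ) = sin(πμ), a 1→16→1 tanh network), not the architectures used in the cited works.

```python
# Toy FFNN response surface: parameter mu -> quantity of interest q(mu).
import numpy as np

rng = np.random.default_rng(0)
mu = rng.uniform(0.0, 1.0, size=(64, 1))    # sampled parameter instances
q = np.sin(np.pi * mu)                      # toy quantity of interest

# One hidden layer of 16 tanh units (illustrative sizes).
W1 = rng.normal(0, 1.0, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.25, (16, 1)); b2 = np.zeros(1)

losses = []
lr = 0.05
for _ in range(2000):                       # full-batch gradient descent
    h = np.tanh(mu @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - q
    losses.append(float(np.mean(err ** 2)))
    # Backpropagation by hand through the two layers.
    g = 2 * err / len(mu)                   # dLoss/dpred
    gW2 = h.T @ g; gb2 = g.sum(axis=0)
    dh = (g @ W2.T) * (1 - h ** 2)          # tanh derivative
    gW1 = mu.T @ dh; gb1 = dh.sum(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

print(losses[0], losses[-1])                # training loss should drop
```

Once trained, evaluating the network at a new μ is essentially free, which is the point of a response surface: the expensive solver is only called to generate the training pairs.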
“…For the sake of generality, we denote as W_e, b_e and W_d, b_d the weight matrices and bias vectors of the CAEs. A detailed explanation of the CAE architecture is presented in [49].…”
Section: Non-linear Dimensionality Reduction
confidence: 99%
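The encoder/decoder parametrisation denoted W_e, b_e and W_d, b_d in this statement can be shown schematically. For brevity, dense layers stand in for the convolutional layers of an actual CAE; the dimensions are invented for illustration.

```python
# Schematic autoencoder with encoder weights (W_e, b_e) and decoder
# weights (W_d, b_d); dense layers stand in for convolutional ones.
import numpy as np

rng = np.random.default_rng(1)
n_full, n_latent = 100, 4                   # full state dim vs latent dim

W_e = rng.normal(0, 0.1, (n_full, n_latent)); b_e = np.zeros(n_latent)
W_d = rng.normal(0, 0.1, (n_latent, n_full)); b_d = np.zeros(n_full)

def encode(u):
    """Map a high-dimensional state u to a low-dimensional latent z."""
    return np.tanh(u @ W_e + b_e)

def decode(z):
    """Reconstruct the full state from the latent representation z."""
    return z @ W_d + b_d

u = rng.normal(size=(8, n_full))            # a batch of state snapshots
z = encode(u)
u_hat = decode(z)
print(z.shape, u_hat.shape)                 # (8, 4) (8, 100)
```

Training would adjust (W_e, b_e, W_d, b_d) to minimise the reconstruction error ‖u − decode(encode(u))‖²; only the forward maps are sketched here.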
“…In recent years, DL-based NIROMs have been the subject of several studies aiming to overcome the limitations of linear projection methods [28] by recovering non-linear, low-dimensional manifolds [36,42]. Various methodologies have been proposed, including supervised and unsupervised DL techniques, to identify low-dimensional manifolds and nonlinear dynamic behaviors [49]. A common methodology in DL-based NIROMs combines convolutional autoencoders (CAEs) for the non-linear dimensionality reduction with long short-term memory (LSTM) networks to predict the temporal evolution [7,50].…”
Section: Introduction
confidence: 99%
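The CAE + LSTM pipeline described in this statement follows a common pattern: encode snapshots to a latent space, learn the latent time dynamics, roll the latent state forward, and decode. The sketch below illustrates only the latent time-stepping half; a linear autoregressive map fitted by least squares stands in for the LSTM, and the latent trajectory is synthetic rather than produced by a real encoder.

```python
# Latent time-stepping sketch: fit z_{t+1} ≈ z_t @ A and roll forward
# in time without solving any system of equations (the LSTM's role in
# a NIROM is played here by a linear map, purely for illustration).
import numpy as np

# Toy 2-D latent trajectory (in practice: CAE-encoded PDE snapshots).
t = np.linspace(0.0, 1.0, 50)
Z = np.stack([np.cos(2 * np.pi * t), np.sin(2 * np.pi * t)], axis=1)

# Least-squares fit of the one-step latent transition map.
A, *_ = np.linalg.lstsq(Z[:-1], Z[1:], rcond=None)

# Roll out from the initial latent state; each step is a matrix-vector
# product, so the online cost is independent of the full PDE dimension.
z = Z[0]
rollout = [z]
for _ in range(len(t) - 1):
    z = z @ A
    rollout.append(z)
rollout = np.array(rollout)

err = np.max(np.abs(rollout - Z))
print(err)   # tiny: this trajectory is exactly linear in latent space
```

The circular trajectory here happens to obey an exact linear recurrence (a rotation), which is why the fit is exact; for genuinely nonlinear latent dynamics one would use a nonlinear time-stepper such as the LSTM networks the statement refers to.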
“…In order to optimize end-to-end, the forward model must be differentiable and computationally efficient. When this is not the case, an alternative approach is to train a surrogate neural network, f, to approximate the forward model [38,31,33,34]. However, even well-trained surrogates may result in errors when included in our end-to-end framework, due to the encoders' ability to learn to exploit the surrogate model.…”
Section: Introduction
confidence: 99%