2021
DOI: 10.1016/j.jocs.2021.101408
Reduced order modeling for parameterized time-dependent PDEs using spatially and memory aware deep learning

Cited by 32 publications (18 citation statements)
References 12 publications
“…The CNN model reached an AUROC of 0.92, while traditional models such as KNN reached an AUROC of only 0.84 ( Perng et al., 2019 ). This is because deep learning algorithms can remove many redundant dimensions through self-learning ( Kam and Kim, 2017 ; Mücke et al., 2021 ), and the multiclass classification problem can be resolved with SoftMax. We also noted that Ke Li et al.…”
Section: Discussion
confidence: 99%
“…One particular class of dimension reduction techniques is represented by autoencoders and, more generally, by other architectures that rely on NNs. Much of the recent literature builds on CAEs and, by extension, Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs), and Bayesian convolutional autoencoders [26]: in [40] convolutional autoencoders are used for dimensionality reduction, with Long Short-Term Memory (LSTM) NNs or causal convolutional neural networks for time-stepping; in [24] the evolution of the dynamics and the parameter dependency are learned jointly with the latent space, using a feed-forward NN and CNNs on randomized-SVD-compressed snapshots, respectively; and in [55] spatial and temporal features are learned separately with a multi-level convolutional autoencoder.…”
Section: Manifold Learning
confidence: 99%
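The compress-then-step pattern attributed to [40] (a convolutional autoencoder for dimensionality reduction, an LSTM for latent time-stepping) can be illustrated with a minimal two-stage sketch. This is a toy stand-in, not the cited implementation: POD via truncated SVD replaces the convolutional encoder/decoder, and a least-squares linear map replaces the LSTM; all names are illustrative.

```python
import numpy as np

# Toy snapshot matrix: 200 spatial DOFs, 50 time steps of a traveling wave.
x = np.linspace(0.0, 1.0, 200)
t = np.linspace(0.0, 1.0, 50)
snapshots = np.array([np.sin(2 * np.pi * (x - 0.5 * tk)) for tk in t])  # (50, 200)

# Stage 1 -- dimensionality reduction. POD (truncated SVD) stands in for
# the convolutional autoencoder; the traveling sine wave is exactly rank 2.
r = 2
U, S, Vt = np.linalg.svd(snapshots, full_matrices=False)
encode = lambda q: q @ Vt[:r].T   # full state -> latent code
decode = lambda z: z @ Vt[:r]     # latent code -> full state
latent = encode(snapshots)        # (50, 2) latent trajectory

# Stage 2 -- time-stepping in the latent space. A least-squares linear map
# z_{k+1} = z_k @ A stands in for the LSTM time-stepper.
A, *_ = np.linalg.lstsq(latent[:-1], latent[1:], rcond=None)

# Roll the ROM forward from the initial latent state, then decode.
z, pred = latent[0], [latent[0]]
for _ in range(len(t) - 1):
    z = z @ A
    pred.append(z)
pred = decode(np.array(pred))

err = np.linalg.norm(pred - snapshots) / np.linalg.norm(snapshots)
print(f"relative ROM error: {err:.2e}")
```

The error is tiny here only because this toy latent dynamics is exactly linear; the nonlinear recurrent time-steppers in the cited works target problems where no such linear latent map exists.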
“…ROMs have also been constructed without access to a FOM by inferring the ROM from data using operator inference methods [64,78,11,52]. Machine learning methods such as convolutional autoencoders have been used in a projective sense [47,66], and inference methods [58,51] have likewise been applied to obtain nonlinear low-dimensional approximations. Other nonlinear dimensionality reduction methods, such as quadratic manifolds [9,32] and diffusion maps [80], have also been leveraged.…”
Section: Introduction
confidence: 99%