2021
DOI: 10.5802/smai-jcm.74

Model Reduction And Neural Networks For Parametric PDEs

Abstract: We develop a general framework for data-driven approximation of input-output maps between infinite-dimensional spaces. The proposed approach is motivated by the recent successes of neural networks and deep learning, in combination with ideas from model reduction. This combination results in a neural network approximation which, in principle, is defined on infinite-dimensional spaces and, in practice, is robust to the dimension of finite-dimensional approximations of these spaces required for computation. For a …
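The combination the abstract describes — model reduction plus a neural network — can be illustrated with a toy sketch: PCA compresses discretised input and output function spaces to a few latent coordinates, and a small network learns the map between them, so the learned map is insensitive to the grid resolution. Everything below (the synthetic operator, dimensions, and hand-rolled training loop) is a hypothetical illustration under assumed settings, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Synthetic "functions": discretised inputs A and outputs U = G(A). ---
D = 64   # grid dimension (stand-in for the infinite-dimensional space)
N = 500  # number of samples
d = 5    # reduced (latent) dimension

x = np.linspace(0.0, 1.0, D)
# Smooth random inputs built from a few sine modes (so PCA is exact here).
coeffs = rng.normal(size=(N, d))
basis = np.stack([np.sin((k + 1) * np.pi * x) for k in range(d)])  # (d, D)
A = coeffs @ basis                 # inputs, shape (N, D)
U = np.tanh(A) + 0.5 * A ** 2      # a nonlinear pointwise "solution operator"

# --- PCA reduction of both spaces: centre, then truncated SVD. ---
def pca_fit(X, d):
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:d]            # mean and d principal directions

a_mean, Va = pca_fit(A, d)
u_mean, Vu = pca_fit(U, d)
Za = (A - a_mean) @ Va.T           # latent inputs,  (N, d)
Zu = (U - u_mean) @ Vu.T           # latent outputs, (N, d)

# Standardise latent coordinates so gradient descent behaves predictably.
Za_s, Zu_s = Za.std(axis=0), Zu.std(axis=0)
za, zu = Za / Za_s, Zu / Zu_s

# --- Tiny MLP between latent spaces, full-batch gradient descent. ---
H = 32
W1 = rng.normal(scale=0.5, size=(d, H)); b1 = np.zeros(H)
W2 = rng.normal(scale=0.5, size=(H, d)); b2 = np.zeros(d)

def forward(Z):
    h = np.tanh(Z @ W1 + b1)
    return h, h @ W2 + b2

lr, losses = 0.05, []
for _ in range(2000):
    h, pred = forward(za)
    err = pred - zu
    losses.append(float(np.mean(err ** 2)))
    # Backpropagation by hand for the squared-error loss.
    g_pred = 2.0 * err / N
    gW2 = h.T @ g_pred; gb2 = g_pred.sum(axis=0)
    g_h = g_pred @ W2.T * (1.0 - h ** 2)   # tanh'(z) = 1 - tanh(z)^2
    gW1 = za.T @ g_h;   gb1 = g_h.sum(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

# Decode the latent prediction back to the grid.
U_hat = (forward(za)[1] * Zu_s) @ Vu + u_mean
rel_err = np.linalg.norm(U_hat - U) / np.linalg.norm(U)
print(f"final latent loss {losses[-1]:.4f}, relative output error {rel_err:.3f}")
```

Because the network only ever sees the d latent coordinates, refining the grid (increasing D) changes the PCA encoders/decoders but not the size of the learned map — the dimension-robustness the abstract refers to.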

Cited by 156 publications (84 citation statements)
References 70 publications (94 reference statements)
“…However, other supervised machine learning techniques have potential to do so. For example, random feature maps and deep neural networks show promise in this regard (Bhattacharya et al., 2020; Nelsen & Stuart, 2020); incorporating these tools in the CES algorithm is a direction of current research.…”
Section: Conclusion and Discussion
confidence: 99%
“…It will be useful if we can learn this as a part of the training, i.e., extend the map F : {{U(τ) : τ ∈ (0, t)}, ξ₀} → σ(t), t ∈ (0, T). This has been successfully demonstrated in simple problems like Darcy flow [7], and remains a work in progress.…”
Section: Discussion
confidence: 82%
“…Now, the implementation of the multiscale problem above requires the calculation of the map F, and therefore the unit cell problem at each macroscopic point x and at each instant t; this is extremely expensive. Our idea is to learn the macroscopic constitutive behavior using model reduction and deep neural networks following the approach of Bhattacharya et al. [7], by utilizing data generated by solutions of the unit cell problem over various strain histories obtained from an appropriate probability distribution in the space of strain histories. To do so, we observe that the unit cell problem in fact specifies the map…”
Section: Broad Overview Of Our Approach
confidence: 99%