2021
DOI: 10.1016/j.jcp.2021.110666

Physics-informed machine learning for reduced-order modeling of nonlinear problems

Cited by 139 publications (63 citation statements)
References 63 publications
“…[4,49,57]. Other nonintrusive alternatives for defining φ have also been suggested, such as Gaussian process regression [30], polynomial chaos expansions [35], and neural networks [14,61].…”
Section: General Background
confidence: 99%
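
The nonintrusive setup this excerpt describes can be summarized in a few lines: build a POD basis from snapshots, then fit a regression map φ from parameters to reduced coefficients. The sketch below uses Gaussian process regression for φ; the snapshot function, grid, and all sizes are illustrative assumptions, not the setup of any cited paper.

```python
# Minimal sketch of a nonintrusive ROM: POD basis + a GP regression map phi
# from parameters to reduced coefficients. All data here is a stand-in.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Hypothetical snapshots: solutions u(mu) on a 200-point grid for 30 parameters.
mu_train = rng.uniform(0.5, 2.0, size=(30, 1))
x = np.linspace(0.0, 1.0, 200)
snapshots = np.sin(np.pi * x[None, :] * mu_train)   # (30, 200), toy physics

# Offline stage: POD basis from the snapshot matrix, keeping r modes.
U, S, Vt = np.linalg.svd(snapshots.T, full_matrices=False)
r = 5
basis = U[:, :r]                                    # (200, r)
coeffs = snapshots @ basis                          # reduced coordinates, (30, r)

# Nonintrusive surrogate phi: parameter -> reduced coefficients.
phi = GaussianProcessRegressor(kernel=RBF(length_scale=0.3)).fit(mu_train, coeffs)

# Online stage: predict reduced coefficients at a new parameter, lift to full space.
mu_test = np.array([[1.3]])
u_rom = phi.predict(mu_test) @ basis.T              # approximate full solution
print(u_rom.shape)                                  # (1, 200)
```

A polynomial chaos expansion or a neural network would slot into the same place as the GP here, which is exactly the substitution the excerpt lists.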
“…Our construction is mostly inspired by the recent advancements in nonlinear approximation theory, e.g. [16,17,55], and the increasing use of deep-learning techniques for parametrized PDEs, as in [14,24,39,44].…”
Section: Introduction
confidence: 99%
“…In [53,46], latent dynamics and nonlinear mappings are modeled as neural ODEs and autoencoders, respectively; in [49,43,65,47], autoencoders are used to learn approximate invariant subspaces of the Koopman operator. Relatedly, there have been studies on learning direct mappings, via e.g. a neural network, from problem-specific parameters to either latent states or approximate solution states [64,19,58,10,35,69], where the latent states are computed by using autoencoders or linear POD.…”
Section: Related Work
confidence: 99%
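
The two ingredients this excerpt contrasts can be sketched directly: an autoencoder supplies the nonlinear latent mapping, and a separate network maps (parameter, time) straight to the latent state, so no latent dynamics need to be integrated online. Everything below (layer sizes, the joint loss, the stand-in data) is an illustrative assumption rather than a reproduction of any cited model.

```python
# Sketch: autoencoder as nonlinear latent mapping + direct (mu, t) -> latent map.
import torch
import torch.nn as nn

full_dim, latent_dim, param_dim = 200, 5, 1

# Autoencoder: encoder/decoder replace the linear POD basis.
encoder = nn.Sequential(nn.Linear(full_dim, 64), nn.Tanh(), nn.Linear(64, latent_dim))
decoder = nn.Sequential(nn.Linear(latent_dim, 64), nn.Tanh(), nn.Linear(64, full_dim))

# Direct map: (parameter, time) -> latent state, no online time integration.
direct_map = nn.Sequential(nn.Linear(param_dim + 1, 64), nn.Tanh(),
                           nn.Linear(64, latent_dim))

u = torch.randn(8, full_dim)            # a batch of snapshots (stand-in data)
mu_t = torch.rand(8, param_dim + 1)     # matching (parameter, time) inputs

# One joint training step: reconstruct snapshots and fit the direct map to the
# encoder's latent states (one common strategy; details vary across papers).
opt = torch.optim.Adam([*encoder.parameters(), *decoder.parameters(),
                        *direct_map.parameters()], lr=1e-3)
z = encoder(u)
loss = nn.functional.mse_loss(decoder(z), u) \
     + nn.functional.mse_loss(direct_map(mu_t), z.detach())
opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))
```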
“…That is, for the given network architecture (Table 1), increasing the amount of training/validating data does not have a significant effect on the performance of the framework. On the other hand, increasing the number of training/validating parameter instances in the way shown in Figure 7 significantly improved the accuracy of the approximations for the other testing parameter instances {µ^(10), µ^(11), µ^(12)}. This set of experiments essentially illustrates that, for a given network architecture, more accurate approximations can be achieved for testing parameter instances that lie between training/validating parameter instances (i.e., interpolation in the parameter space) than for those that lie outside of them (i.e., extrapolation in the parameter space).…”
Section: NODE, PNODE
confidence: 99%
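The interpolation-versus-extrapolation point in this excerpt is easy to reproduce on synthetic data: a surrogate fit on a bounded set of parameter instances is typically far more accurate inside that range than outside it. The toy function, surrogate, and parameter values below are assumptions chosen only to make the gap visible, not the paper's experiment.

```python
# Toy demonstration: the same surrogate, evaluated inside vs. outside the
# training parameter range, shows a large accuracy gap.
import numpy as np

def f(mu):
    return np.sin(3.0 * mu)                     # stand-in parameter-to-output map

mu_train = np.linspace(0.2, 1.8, 9)             # training parameter instances
mu_interp = np.array([0.7, 1.1, 1.5])           # inside the training range
mu_extrap = np.array([2.2, 2.6, 3.0])           # outside the training range

coef = np.polyfit(mu_train, f(mu_train), deg=4) # simple polynomial surrogate

def err(mu):
    return np.abs(np.polyval(coef, mu) - f(mu)).max()

print(f"interpolation error: {err(mu_interp):.2e}")  # small
print(f"extrapolation error: {err(mu_extrap):.2e}")  # orders of magnitude larger
```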