2021
DOI: 10.1137/20m1318043
Understanding and Mitigating Gradient Flow Pathologies in Physics-Informed Neural Networks

Cited by 647 publications (424 citation statements)
References 22 publications
“…However, we still lack the theoretical guarantees that such adaptive schemes converge, and there is no adaptive work for the parametric case. Finally, there is a need for PINN-specific training algorithms [110,112] and effective parallelization strategies [80], especially for massive GPUs.…”
Section: Discussion (mentioning)
confidence: 99%
“…The third and last DNN architecture we consider is the modified residual neural network (ModResNet), proposed by [112]. The forward pass of a scalar-valued ModResNet with L hidden layers is defined recursively by:…”
Section: Modified Residual Neural Network (mentioning)
confidence: 99%
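The recursion itself is elided in the excerpt, but the architecture attributed to [112] augments a plain multilayer perceptron with two encoder streams U and V whose pointwise blend replaces each hidden state. The following is a minimal JAX sketch of such a forward pass, assuming tanh activations; the function and parameter names (modified_mlp, init_params, "U", "V") are illustrative placeholders, not the authors' code.

import jax.numpy as jnp
from jax import random

def init_params(key, in_dim, width, depth, out_dim=1):
    # Glorot-style initialization for the gated architecture (illustrative).
    def dense(k, m, n):
        scale = jnp.sqrt(2.0 / (m + n))
        return scale * random.normal(k, (m, n)), jnp.zeros(n)
    keys = random.split(key, depth + 3)
    return {
        "U": dense(keys[0], in_dim, width),                  # encoder stream U
        "V": dense(keys[1], in_dim, width),                  # encoder stream V
        "hidden": [dense(keys[2 + k], in_dim if k == 0 else width, width)
                   for k in range(depth)],
        "out": dense(keys[depth + 2], width, out_dim),
    }

def modified_mlp(params, x):
    # Forward pass: each hidden state gates a pointwise blend of U and V.
    (Wu, bu), (Wv, bv) = params["U"], params["V"]
    U = jnp.tanh(x @ Wu + bu)
    V = jnp.tanh(x @ Wv + bv)
    W0, b0 = params["hidden"][0]
    H = jnp.tanh(x @ W0 + b0)
    for Wk, bk in params["hidden"][1:]:
        Z = jnp.tanh(H @ Wk + bk)
        H = (1.0 - Z) * U + Z * V    # convex combination controlled by Z
    Wo, bo = params["out"]
    return H @ Wo + bo

For a scalar-valued network, out_dim=1; calling modified_mlp(init_params(random.PRNGKey(0), 2, 50, 4), x) evaluates it at a batch of collocation points x.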
“…where MSE_F corresponds to the Euler equations (3) including the entropy conditions, MSE_Inflow corresponds to the inflow boundary conditions on primitive variables, MSE_∇ρ|_D corresponds to the density gradient, where D ⊂ Ω is a small region of the computational domain, and MSE_p* corresponds to the pressure data on the wall surface. We also employ dynamic weights [44], denoted ω_i, i = 1, 2, …, in front of all the MSE terms in the loss function. Hereafter, for brevity, the loss function J is written without showing its dependence on the network parameters Θ.…”
Section: Expansion Wave Problem (mentioning)
confidence: 99%
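The dynamic weights of [44] follow the learning-rate-annealing rule of the paper under review: each ω_i is driven toward the ratio between the largest gradient magnitude of the residual loss and the mean gradient magnitude of the i-th term, smoothed by a moving average. A minimal sketch under those assumptions, where loss_res and loss_i stand for scalar loss functions such as MSE_F and MSE_Inflow; the helper names are hypothetical.

import jax
import jax.numpy as jnp

def abs_grad_stats(loss_fn, params):
    # Max and mean of |dL/dθ| across all parameter arrays.
    g = jax.grad(loss_fn)(params)
    flat = jnp.concatenate([jnp.abs(leaf).ravel()
                            for leaf in jax.tree_util.tree_leaves(g)])
    return flat.max(), flat.mean()

def anneal_weight(omega, loss_res, loss_i, params, alpha=0.9):
    # One update of a dynamic weight ω_i via the gradient-ratio rule.
    # alpha is a smoothing rate; its value is a tuning choice.
    max_res, _ = abs_grad_stats(loss_res, params)
    _, mean_i = abs_grad_stats(loss_i, params)
    omega_hat = max_res / mean_i                       # instantaneous balancing ratio
    return (1.0 - alpha) * omega + alpha * omega_hat   # moving average damps oscillations

The weighted loss is then J(Θ) = MSE_F + Σ_i ω_i MSE_i, with each ω_i held fixed (treated as a constant) during the gradient step on Θ.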
“…By trial and error, we found suitable weights (Wang et al., 2020) that allow for good convergence of the loss function. The values are seen in Eq.…”
Section: Homogeneous Media Case Using XPINN (mentioning)
confidence: 99%