2021
DOI: 10.1016/j.neunet.2020.12.028

A Dual-Dimer method for training physics-constrained neural networks with minimax architecture

Cited by 46 publications (13 citation statements) · References 35 publications

“…PINN methodology [109,146,191], many other variants were proposed, such as the variational hp-VPINN, as well as the conservative PINN (cPINN) [71]. Another approach is physics-constrained neural networks (PCNNs) [97,168,200]. While PINNs incorporate both the PDE and its initial/boundary conditions (soft BC) in the training loss function, PCNNs are "data-free" NNs, i.e.…”
Section: What the PINNs Are
confidence: 99%
“…• physics-informed NNs (PINNs) (Raissi et al., 2019; Yang and Perdikaris, 2019; Meng et al., 2020) • physics-constrained neural networks (PCNNs) (Zhu et al., 2019; Sun et al., 2020a; Liu and Wang, 2021). PINNs incorporate both the PDE and its initial/boundary conditions (soft BC) in the training loss function, while physics-constrained NNs, i.e. "data-free" NNs, enforce the initial/boundary conditions (hard BC) via a custom NN architecture while embedding the PDE in the training loss.…”
Section: What Are PINNs
confidence: 99%
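
The soft-BC / hard-BC distinction drawn in the excerpts above is easiest to see in code. The following PyTorch sketch is illustrative only, not the implementation of any cited paper; the 1D Poisson test problem u''(x) = f(x) with u(0) = u(1) = 0, the network sizes, and names such as `pde_residual` and `u_hard` are all assumptions. The soft-BC (PINN-style) loss adds a weighted boundary penalty, while the hard-BC (PCNN-style) variant multiplies the network output by x(1 − x) so the boundary condition holds by construction and only the PDE residual remains in the loss.

```python
import torch

torch.manual_seed(0)

def make_mlp():
    return torch.nn.Sequential(
        torch.nn.Linear(1, 32), torch.nn.Tanh(),
        torch.nn.Linear(32, 32), torch.nn.Tanh(),
        torch.nn.Linear(32, 1),
    )

def f(x):
    # source term chosen so the exact solution is u(x) = sin(pi x)
    return -torch.pi ** 2 * torch.sin(torch.pi * x)

def pde_residual(u_fn, x):
    # residual of u''(x) = f(x), computed with autograd
    x = x.clone().requires_grad_(True)
    u = u_fn(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    return d2u - f(x)

net_soft, net_hard = make_mlp(), make_mlp()

def u_soft(x):
    # soft BC: raw network output; boundary error is penalized in the loss
    return net_soft(x)

def u_hard(x):
    # hard BC: the x(1 - x) factor enforces u(0) = u(1) = 0 by construction
    return x * (1.0 - x) * net_hard(x)

x_in = torch.rand(128, 1)                # interior collocation points
x_bc = torch.tensor([[0.0], [1.0]])      # boundary points
lam_bc = 10.0                            # boundary weight (soft BC only)

# PINN-style loss: PDE residual plus weighted boundary penalty (soft BC).
loss_soft = (pde_residual(u_soft, x_in) ** 2).mean() \
            + lam_bc * (u_soft(x_bc) ** 2).mean()

# PCNN-style loss: only the PDE residual, since the BC is built in (hard BC).
loss_hard = (pde_residual(u_hard, x_in) ** 2).mean()
```

The x(1 − x) factor is the simplest choice of distance function for this geometry; more complex domains require more elaborate ansätze to enforce the hard BC.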
“…We should note that the authors in [25] only assume one type of boundary condition in their method. Liu and Wang [26] proposed to update these hyperparameters (i.e., λ_Ω, λ_∂Ω) using gradient ascent while updating the parameters of their neural network model using a Dual-Dimer saddle-point search algorithm. Following the approach presented in [26], McClenny and Braga-Neto [27] applied hyperparameters to individual training points in the domain and on the boundary.…”
Section: Physics-informed Neural Network
confidence: 99%
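
The minimax training scheme this excerpt describes can be sketched as simultaneous gradient descent-ascent on a weighted loss: minimize over the network parameters θ while maximizing over the weights λ_Ω and λ_∂Ω. The PyTorch loop below shows only that plain descent-ascent variant, not the Dual-Dimer algorithm itself, which is a more elaborate saddle-point search; `pde_loss` and `bc_loss` are hypothetical stand-ins for the interior and boundary loss terms.

```python
import torch

# Stand-ins for the interior (PDE) and boundary loss terms, using the
# same illustrative Poisson problem as in the previous sketch.
net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 1))

def pde_loss(net):
    x = torch.rand(128, 1).requires_grad_(True)
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    return ((d2u + torch.pi ** 2 * torch.sin(torch.pi * x)) ** 2).mean()

def bc_loss(net):
    return (net(torch.tensor([[0.0], [1.0]])) ** 2).mean()

lam_omega = torch.tensor(1.0, requires_grad=True)   # lambda_Omega
lam_bdry = torch.tensor(1.0, requires_grad=True)    # lambda_dOmega
theta_opt = torch.optim.Adam(net.parameters(), lr=1e-3)
# maximize=True flips the update direction: gradient *ascent* on the weights.
lam_opt = torch.optim.Adam([lam_omega, lam_bdry], lr=1e-2, maximize=True)

for step in range(5000):
    loss = lam_omega * pde_loss(net) + lam_bdry * bc_loss(net)
    theta_opt.zero_grad()
    lam_opt.zero_grad()
    loss.backward()
    theta_opt.step()   # descent in the network parameters
    lam_opt.step()     # ascent in the loss weights
```

In practice the ascent step must be kept in check, for example by bounding or normalizing the weights, since an unconstrained maximizer would drive them to infinity.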
“…Liu and Wang [26] proposed to update these hyperparameters (i.e., λ_Ω, λ_∂Ω) using gradient ascent while updating the parameters of their neural network model using a Dual-Dimer saddle-point search algorithm. Following the approach presented in [26], McClenny and Braga-Neto [27] applied hyperparameters to individual training points in the domain and on the boundary. We should note that the reported results in [27] indicate that the authors' approach marginally outperforms the empirical method proposed in [23].…”
Section: Physics-informed Neural Network
confidence: 99%
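
The per-point variant attributed to McClenny and Braga-Neto [27] replaces the two scalar weights with one trainable weight per collocation point, still updated by ascent while the network parameters are updated by descent. A minimal sketch under the same assumptions as above (the Poisson test problem and all identifiers are illustrative):

```python
import torch

net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 1))

x_in = torch.rand(128, 1)                # fixed interior collocation points
x_bc = torch.tensor([[0.0], [1.0]])      # fixed boundary points

# One trainable weight per training point instead of one per loss term.
w_in = torch.ones(128, 1, requires_grad=True)
w_bc = torch.ones(2, 1, requires_grad=True)

theta_opt = torch.optim.Adam(net.parameters(), lr=1e-3)
w_opt = torch.optim.Adam([w_in, w_bc], lr=1e-2, maximize=True)

def residual(net, x):
    # residual of the same illustrative Poisson problem as above
    x = x.clone().requires_grad_(True)
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    return d2u + torch.pi ** 2 * torch.sin(torch.pi * x)

for step in range(5000):
    # Each squared residual is scaled by its own trainable weight.
    loss = (w_in * residual(net, x_in) ** 2).mean() \
           + (w_bc * net(x_bc) ** 2).mean()
    theta_opt.zero_grad()
    w_opt.zero_grad()
    loss.backward()
    theta_opt.step()   # descent in the network parameters
    w_opt.step()       # ascent in the per-point weights
```

Points whose residuals are hard to reduce accumulate large weights, focusing subsequent training on them.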