2022
DOI: 10.1103/physreve.105.065305
Hamiltonian neural networks for solving equations of motion

Cited by 54 publications (29 citation statements)
References 37 publications
“…where q and p represent, respectively, position and momenta of discrete particles. A similar approach is followed in [70]. The same loss function in Eq.…”
Section: Neural Network for Conservative Systems
confidence: 99%
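The quoted passage concerns a physics-informed loss built from Hamilton's equations in the (q, p) variables. A minimal numpy sketch of such a residual loss is below; the harmonic-oscillator Hamiltonian, the function names, and the finite-difference derivatives are all illustrative assumptions, not the cited papers' implementation (which would differentiate a trained network, typically via automatic differentiation).

```python
import numpy as np

# Illustrative Hamiltonian (harmonic oscillator; an assumption for this
# sketch -- the cited works treat general conservative systems):
# H(q, p) = (p^2 + q^2) / 2.
def hamiltonian(q, p):
    return 0.5 * (p**2 + q**2)

def hamilton_residual_loss(q_fn, p_fn, t, h=1e-4):
    """Mean squared residual of Hamilton's equations
       dq/dt = dH/dp,  dp/dt = -dH/dq
    for trial trajectories q_fn(t), p_fn(t), using central differences."""
    q, p = q_fn(t), p_fn(t)
    dq_dt = (q_fn(t + h) - q_fn(t - h)) / (2 * h)
    dp_dt = (p_fn(t + h) - p_fn(t - h)) / (2 * h)
    # Partial derivatives of H, also approximated by central differences.
    dH_dq = (hamiltonian(q + h, p) - hamiltonian(q - h, p)) / (2 * h)
    dH_dp = (hamiltonian(q, p + h) - hamiltonian(q, p - h)) / (2 * h)
    return np.mean((dq_dt - dH_dp) ** 2 + (dp_dt + dH_dq) ** 2)

t = np.linspace(0.0, 2 * np.pi, 200)
# The exact trajectory q = cos t, p = -sin t yields a near-zero residual...
loss_exact = hamilton_residual_loss(np.cos, lambda s: -np.sin(s), t)
# ...while a trajectory violating Hamilton's equations does not.
loss_wrong = hamilton_residual_loss(np.cos, np.cos, t)
```

Minimizing such a residual over network parameters is what drives the trial trajectories toward solutions of the equations of motion.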
“…Supervised PINNs can be trained on data to learn nonlinear differential operators [14], discover differential equations [21,20], and solve inverse problems [1,4,18]. Unsupervised PINNs can be trained without using any labeled data to discover analytical and differentiable solutions of ordinary [12,16] or partial differential equations (PDEs) [22,11], and eigenvalue problems [7,19,13,8].…”
Section: Introduction
confidence: 99%
“…The NN consists of multiple hidden layers with the trigonometric sin(·) function used as the activation function for the hidden neurons. This choice of activation has been shown to improve PINNs' performance in solving nonlinear dynamical systems [76] and high-dimensional partial differential equations [71]. The outputs of the NN are the solutions N_x(t) ∈ R^n and N_u(t) ∈ R^m.…”
confidence: 99%
“…The outputs of the NN are the solutions N_x(t) ∈ R^n and N_u(t) ∈ R^m. We construct neural state and control vectors that identically satisfy the initial conditions by parametrizing x(t) = x(0) + f(t)N_x(t) and u(t) = u(0) + f(t)N_u(t), where f(t) = 1 − e^(−t) is a parametric function satisfying f(0) = 0 [76]. The network parameters (weights and biases) are randomly initialized and then optimized by minimizing a physics-informed loss function defined by…”
confidence: 99%
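The hard initial-condition constraint quoted above can be sketched in a few lines of numpy. The network sizes and initialization scheme below are illustrative assumptions, only the state branch N_x(t) is shown, and a plain feedforward pass stands in for a trained PINN; the key point is that f(t) = 1 − e^(−t) vanishes at t = 0, so x(0) = x0 holds exactly for any weights.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(sizes, rng):
    # Small MLP; layer widths and Gaussian init are illustrative choices.
    params = []
    for n_in, n_out in zip(sizes[:-1], sizes[1:]):
        W = rng.normal(0.0, 1.0 / np.sqrt(n_in), (n_in, n_out))
        b = np.zeros(n_out)
        params.append((W, b))
    return params

def mlp(params, t):
    # Hidden layers use the trigonometric sin(.) activation from the
    # quoted passage; the final layer is linear, giving N_x(t).
    h = np.atleast_2d(t).T          # shape (batch, 1)
    for W, b in params[:-1]:
        h = np.sin(h @ W + b)
    W, b = params[-1]
    return h @ W + b

def constrained_state(params, t, x0):
    # Hard constraint: x(t) = x0 + f(t) * N_x(t) with f(t) = 1 - e^{-t},
    # so f(0) = 0 and the initial condition is satisfied identically.
    f = 1.0 - np.exp(-np.atleast_2d(t).T)
    return x0 + f * mlp(params, t)

params = init_mlp([1, 16, 2], rng)          # 1 input (t), 2 state components
x0 = np.array([1.0, -0.5])
x_at_0 = constrained_state(params, np.array([0.0]), x0)
# x_at_0 equals x0 regardless of the (random) network weights.
```

In a full PINN, `params` would then be optimized against the physics-informed residual loss; the constraint removes any initial-condition term from that loss.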