2021
DOI: 10.48550/arxiv.2111.05458
Preprint

Which priors matter? Benchmarking models for learning latent dynamics

Abstract: Learning dynamics is at the heart of many important applications of machine learning (ML), such as robotics and autonomous driving. In these settings, ML algorithms typically need to reason about a physical system using high-dimensional observations, such as images, without access to the underlying state. Recently, several methods have been proposed to integrate priors from classical mechanics into ML models to address the challenge of physical reasoning from images. In this work, we take a sober look at the current…

Cited by 4 publications (5 citation statements); references 32 publications.
Citation types: 0 supporting, 5 mentioning, 0 contrasting.
Citing publications by year: 2022 (2), 2023 (2).

“…Addressing this limitation is an important line for future research with many promising results obtained recently (see, e.g., [4,6,7,14]). Another important research question is to identify the most efficient way to incorporate inductive biases from physics into modeling of dynamical systems, which is a topic of active debate at the moment [1,12].…”
Section: Discussion (mentioning)
confidence: 99%
“…Some symplectic integration schemes [5,9,25] make additional assumptions, such as separability of the Hamiltonian. A recent model called Non-separable Symplectic Neural Networks (NSSNN) [28] relaxes this assumption via an improved symplectic integrator that works well for both separable and non-separable Hamiltonians.…”
Section: Hamiltonian Neural Network (mentioning)
confidence: 99%
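To make the separability point concrete, here is a minimal leapfrog (Stormer-Verlet) sketch, assuming a Hamiltonian of the form H(q, p) = p^2/(2m) + V(q); the kick-drift-kick splitting below is only possible because H separates into a p-only kinetic term and a q-only potential term. The function names and the harmonic-oscillator example are illustrative, not taken from the cited works.

```python
def leapfrog_step(q, p, grad_V, dt, mass=1.0):
    """One leapfrog (Stormer-Verlet) step for a separable Hamiltonian
    H(q, p) = p^2 / (2 * mass) + V(q). Separability lets the update
    alternate between p-only and q-only sub-steps, keeping the map
    symplectic (it preserves phase-space volume)."""
    p_half = p - 0.5 * dt * grad_V(q)            # half "kick" from -dV/dq
    q_next = q + dt * p_half / mass              # full "drift" from dT/dp
    p_next = p_half - 0.5 * dt * grad_V(q_next)  # second half "kick"
    return q_next, p_next

# Illustrative example: harmonic oscillator, V(q) = q^2 / 2, grad_V(q) = q.
q, p = 1.0, 0.0
for _ in range(1000):
    q, p = leapfrog_step(q, p, grad_V=lambda q: q, dt=0.01)
print(0.5 * p**2 + 0.5 * q**2)  # energy stays near 0.5 instead of drifting
```

A non-separable H(q, p) admits no such q-only/p-only split, which is why models like NSSNN [28] need a modified integrator.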
“…Elements of physics-guided loss and physics-guided architecture are used in neural ordinary differential equations (NODEs) and energy-conserving neural networks. In NODEs, explicit integration steps are performed in each layer of the NN as one evaluation step of a standard ODE solver [442][443][444][445][446]. In energy-conserving NNs, the structure of the Lagrangian and Hamiltonian equations has been embedded into the NN construction to ensure energy-conservative behavior, as reviewed by Lutter and Peters [447] and implemented in different structures in [442,[448][449][450][451][452][453][454][455][456].…”
Section: Residual Modeling (mentioning)
confidence: 99%
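As a hedged illustration of the NODE mechanism the quote describes (each "layer" of the network performs one explicit step of an ODE solver on a learned vector field), here is a minimal numpy sketch; the two-layer MLP vector field, its random weights, and the step count are placeholder assumptions, not any of the cited architectures.

```python
import numpy as np

def f_theta(x, W1, W2):
    """A small MLP parameterizing the learned vector field dx/dt = f_theta(x)."""
    return W2 @ np.tanh(W1 @ x)

def node_forward(x, W1, W2, dt=0.1, n_steps=10):
    """Neural-ODE-style forward pass: each of the n_steps 'layers'
    is one explicit Euler step of the solver, x <- x + dt * f_theta(x).
    Cited implementations typically use higher-order solvers (e.g. RK4)."""
    for _ in range(n_steps):
        x = x + dt * f_theta(x, W1, W2)
    return x

rng = np.random.default_rng(0)
x0 = rng.normal(size=4)             # initial state
W1 = 0.1 * rng.normal(size=(8, 4))  # placeholder weights
W2 = 0.1 * rng.normal(size=(4, 8))
print(node_forward(x0, W1, W2))     # state after integrating to t = 1.0
```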
“…Elements of physics-guided loss and physics-guided architecture are used in neural ordinary differential equations (NODEs) and energy-conserving neural networks (ECNNs). In NODEs, explicit integration steps are performed in each layer of the NN as one evaluation step of a common ODE solver [398][399][400][401][402]. In ECNNs, the structure of the Lagrangian and Hamiltonian equations has been embedded into the NN construction to ensure energy-conservative behavior, as reviewed by Lutter and Peters [403] and implemented in different structures in [398,[404][405][406][407][408][409][410][411][412].…”
Section: Physics Guided Machine Learning (mentioning)
confidence: 99%
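Both quotes also mention energy-conserving networks built on Hamiltonian structure; spelled out, the network parameterizes a scalar energy H_theta(q, p) and the time derivatives are read off from Hamilton's equations, dq/dt = dH/dp and dp/dt = -dH/dq, so H is conserved along exact trajectories. A minimal sketch under stated assumptions: scalar q and p, finite-difference gradients standing in for the autodiff the cited models use, and a hand-written quadratic H standing in for a learned network.

```python
def hamiltonian_vector_field(H, q, p, eps=1e-5):
    """Given a scalar energy function H(q, p), return (dq/dt, dp/dt)
    from Hamilton's equations: dq/dt = dH/dp, dp/dt = -dH/dq.
    Central finite differences stand in for autodiff here."""
    dH_dq = (H(q + eps, p) - H(q - eps, p)) / (2 * eps)
    dH_dp = (H(q, p + eps) - H(q, p - eps)) / (2 * eps)
    return dH_dp, -dH_dq

# Stand-in for a learned H_theta: the harmonic-oscillator energy.
H = lambda q, p: 0.5 * p**2 + 0.5 * q**2
dq_dt, dp_dt = hamiltonian_vector_field(H, q=1.0, p=0.0)
print(dq_dt, dp_dt)  # approximately (0.0, -1.0): the force pulls q back to 0
```

Because any trajectory following this vector field exactly satisfies dH/dt = (dH/dq)(dH/dp) - (dH/dp)(dH/dq) = 0, the conservation law is built into the architecture rather than learned from data.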