2020 · Preprint
DOI: 10.21203/rs.3.rs-55125/v1

Universal Differential Equations for Scientific Machine Learning

Abstract: In the context of science, the well-known adage “a picture is worth a thousand words” might well be “a model is worth a thousand datasets.” Scientific models, such as Newtonian physics or biological gene regulatory networks, are human-driven simplifications of complex phenomena that serve as surrogates for the countless experiments that validated the models. Recently, machine learning has been able to overcome the inaccuracies of approximate modeling by directly learning the entire set of nonlinear interaction…

Cited by 279 publications (248 citation statements) · References 82 publications (35 reference statements)
Citation classification: 1 supporting, 241 mentioning, 0 contrasting

Citation statements (ordered by relevance):
“…Recently, it has been shown that neural networks can be used as function approximators to recover unknown constitutive relationships in a system of coupled ODEs. 52,54 Following this principle, we represent the unknown relationship as an n-layer-deep neural network with weights, activation function r, and the input vector as …”
Section: Results (mentioning)
confidence: 99%
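As an illustration of the construction this statement describes, here is a minimal sketch of embedding a small neural network as the unknown constitutive term inside an otherwise mechanistic system of coupled ODEs, solved with OrdinaryDiffEq.jl. The two-state model, the hidden width `H`, and all parameter names are illustrative assumptions, not the citing study's system.

```julia
# Minimal sketch (illustrative, not the cited study's code): a hand-rolled
# two-layer MLP supplies the unknown constitutive term in a coupled ODE.
using OrdinaryDiffEq

const H = 8  # hidden width (assumed)

# MLP with all weights flattened into one parameter vector p; r is the activation.
function nn(u, p)
    W1 = reshape(@view(p[1:2H]), H, 2)
    b1 = @view p[2H+1:3H]
    W2 = reshape(@view(p[3H+1:4H]), 1, H)
    b2 = p[4H+1]
    r  = tanh
    return (W2 * r.(W1 * u .+ b1))[1] + b2
end

# Right-hand side: known mechanistic kinetics plus the learned term nn(u, p).
function rhs!(du, u, p, t)
    du[1] = -0.5u[1] + nn(u, p)   # unknown constitutive relationship
    du[2] =  0.5u[1] - 0.3u[2]    # known (assumed) linear kinetics
end

p0   = 0.1 .* randn(4H + 1)                 # initial network weights
prob = ODEProblem(rhs!, [1.0, 0.0], (0.0, 10.0), p0)
sol  = solve(prob, Tsit5(), saveat = 0.5)   # forward solve at current weights
```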
“…For most of the regions under consideration, the parameters were optimized by minimizing the loss function given in Equation 13. Minimization was performed using local adjoint sensitivity analysis 54,68 following a procedure similar to that outlined in a recent study, 52 with the ADAM optimizer 69 at a learning rate of 0.01. The number of iterations required for convergence varied with the region considered and generally ranged from 40,000 to 100,000.…”
Section: Methods (mentioning)
confidence: 99%
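Under the assumption that `prob` and `p0` are the problem and initial weights from the previous sketch, the following sketches the training step this statement reports: gradients via local adjoint sensitivity analysis (SciMLSensitivity.jl with Zygote.jl) applied by ADAM at the stated learning rate of 0.01. The `data` array is a synthetic placeholder for measurements, and the least-squares loss merely stands in for the study's Equation 13.

```julia
# Training sketch (assumed setup): adjoint-sensitivity gradients + ADAM(0.01).
using OrdinaryDiffEq, SciMLSensitivity, Zygote, Optimisers

ts   = 0.0:0.5:10.0
data = rand(2, length(ts))   # placeholder for experimental measurements

function loss(p)
    sol = solve(remake(prob; p = p), Tsit5();
                saveat = ts, sensealg = InterpolatingAdjoint())
    return sum(abs2, Array(sol) .- data)   # stand-in for the study's Equation 13
end

function train(p; iters = 1_000)  # the study reports 40,000 to 100,000 iterations
    opt = Optimisers.setup(Optimisers.Adam(0.01), p)   # learning rate 0.01
    for _ in 1:iters
        g = Zygote.gradient(loss, p)[1]       # local adjoint sensitivities
        opt, p = Optimisers.update(opt, p, g)
    end
    return p
end

p_trained = train(copy(p0))
```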
“…Based on the ideas in (Chen et al., 2018), a set of packages for the Julia programming language (Bezanson et al., 2017) are combined to fit neural differential equation models (Rackauckas et al., 2019, 2020) to experimental data, leading to differential equations that can be solved by standard differential equation solvers (Rackauckas and Nie, 2017). However, the packages aim higher: they allow a very general mixing of mechanistic models and neural differential equation models in the same framework, with the user free to choose whether only the parameters of the neural network are tuned, or the parameters of both the mechanistic model and the neural differential equation.…”
Section: Sims 61 (mentioning)
confidence: 99%
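The sketch below (reusing `nn` and `p0` from the first sketch; the model and names are illustrative assumptions, not the packages' actual API) shows the kind of mixing the statement describes: one parameter vector carries both a mechanistic rate constant and the network weights, so the user can hold either part fixed or tune both jointly.

```julia
# Universal-differential-equation sketch: mechanistic and neural parameters
# live in one vector, so either subset can be optimized.
using OrdinaryDiffEq

function ude!(du, u, p, t)
    k = p[1]                # mechanistic parameter (assumed decay/transfer rate)
    θ = @view p[2:end]      # neural-network weights (see the earlier nn sketch)
    du[1] = -k * u[1] + nn(u, θ)
    du[2] =  k * u[1] - 0.3u[2]
end

p_all    = vcat(0.5, p0)    # [mechanistic parameter; network weights]
prob_ude = ODEProblem(ude!, [1.0, 0.0], (0.0, 10.0), p_all)
# Optimizing only p_all[2:end] trains just the network; optimizing the full
# vector tunes the mechanistic model and the neural component together.
```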
“…The idea of differentiating through the numerical solution of Newton’s equations of motion to obtain gradients that can be used to improve a learned force field has been discussed [28], but has yet to produce generally useful force fields. Conceptually, the idea is related to recent work on neural differential equations [29, 30]. The idea is appealing because a large number of parameters can be improved at once, rather than the small number currently modified.…”
Section: Introduction (mentioning)
confidence: 99%
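As a toy illustration of that idea (assumed here, not taken from the cited force-field work), the sketch below integrates a one-dimensional harmonic oscillator with velocity Verlet and uses Zygote.jl to differentiate a terminal loss through the whole trajectory with respect to the force-field parameter `k`.

```julia
# Differentiating through Newton's equations: gradient of a trajectory loss
# with respect to a force-field parameter, via reverse-mode AD over the solver.
using Zygote

force(x, k) = -k * x   # 1-D "force field" with a single learnable parameter

function simulate(k; x0 = 1.0, v0 = 0.0, dt = 0.01, steps = 500)
    x, v = x0, v0
    for _ in 1:steps                         # velocity-Verlet integration
        a  = force(x, k)
        x += v * dt + 0.5a * dt^2
        v += 0.5 * (a + force(x, k)) * dt
    end
    return x
end

loss(k) = (simulate(k) - 0.2)^2              # match an illustrative target position
dl_dk   = Zygote.gradient(loss, 1.0)[1]      # gradient through all 500 steps
```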