2019
DOI: 10.1016/j.jcp.2019.01.036

Data-driven discovery of PDEs in complex datasets

Abstract: Many processes in science and engineering can be described by partial differential equations (PDEs). Traditionally, PDEs are derived from first principles of physics to obtain the relations between the physical quantities of interest. A different approach is to measure the quantities of interest and use deep learning to reverse engineer the PDEs describing the physical process. In this paper we use machine learning, and deep learning in particular, to discover PDEs hidden in complex…

Cited by 143 publications (100 citation statements)
References 27 publications
“…A summary of these equations and of the dictionaries used is presented in Table 1. Our neural network is a feed-forward, fully connected neural network with 4 hidden layers and 50 neurons per hidden layer, similar to [4], using the softplus [9] activation function as the nonlinearity. For optimization, we use the Adam optimizer with a learning rate of 0.02 for the parameter φ and 0.002 for θ.…”
Section: Numerical Simulations (mentioning)
confidence: 99%
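The setup quoted above can be sketched in code. This is a minimal illustration only: the framework (PyTorch), the input/output dimensions, and the shape of the dictionary-coefficient tensor `phi` are assumptions not stated in the snippet; only the architecture (4 hidden layers, 50 neurons, softplus) and the two Adam learning rates (0.02 for φ, 0.002 for the network parameters θ) come from the quoted text.

```python
import torch
import torch.nn as nn

def make_network(in_dim=2, out_dim=1, width=50, depth=4):
    """Fully connected feed-forward net: `depth` hidden layers of
    `width` neurons with softplus nonlinearities, linear output."""
    layers = [nn.Linear(in_dim, width), nn.Softplus()]
    for _ in range(depth - 1):
        layers += [nn.Linear(width, width), nn.Softplus()]
    layers.append(nn.Linear(width, out_dim))
    return nn.Sequential(*layers)

net = make_network()

# Placeholder dictionary coefficients phi (size 5 is an assumption).
phi = torch.zeros(5, requires_grad=True)

# Two Adam parameter groups with the quoted learning rates:
# 0.02 for phi and 0.002 for the network weights theta.
optimizer = torch.optim.Adam([
    {"params": [phi], "lr": 0.02},
    {"params": net.parameters(), "lr": 0.002},
])

x = torch.randn(8, 2)   # batch of 8 sample points (t, x)
y = net(x)
print(y.shape)          # torch.Size([8, 1])
```

Per-parameter-group learning rates are the natural way to express the snippet's split rates in PyTorch; each training step would then compute a loss involving both `net` and `phi` and call `optimizer.step()` once.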
“…The first, second and third equalities are obvious given an appropriate choice of Φ, depending on the Markov property of X and its moment conditions in Assumption 2. In fact, because of the existence and uniqueness of φ ∈ Φ such that the RHS of the first equality achieves its minimum, we know that equation (38) holds. From another perspective, we know that…”
Section: Appendix (mentioning)
confidence: 99%
“…Recent applications include empirical and theoretical asset pricing, as well as reinforcement learning and Q-learning for solving dynamic programming problems such as optimal investment-consumption choice, option pricing and the construction of optimal trading strategies, e.g., [18], [19], [20], [21], [22], [23], [24], [25], [26], [27], [28] and references therein. Numerical methods to solve PDEs and BSDEs, or the related inverse problems, can be found in [29], [30], [31], [32], [33], [34], [35], [36], [37], [38] and [39]. Machine-learning-based methods enjoy the advantage of being fast and able to handle large data sets and high-dimensional problems.…”
Section: Introduction (mentioning)
confidence: 99%
“…Based on this, a deep Galerkin method was tested to solve PDEs, including high-dimensional ones. Berg and Nyström [Berg and Nyström (2018)] proposed a unified deep neural network approach to approximate solutions to PDEs and then used deep learning to discover PDEs hidden in complex data sets from measurement data [Berg and Nyström (2019)]. In general, deep feed-forward neural networks can serve as suitable solution approximators, especially for high-dimensional PDEs with complex domains.…”
Section: Introduction (mentioning)
confidence: 99%