2022
DOI: 10.3390/risks10030047

The Seven-League Scheme: Deep Learning for Large Time Step Monte Carlo Simulations of Stochastic Differential Equations

Abstract: We propose an accurate data-driven numerical scheme to solve stochastic differential equations (SDEs), by taking large time steps. The SDE discretization is built up by means of the polynomial chaos expansion method, on the basis of accurately determined stochastic collocation (SC) points. By employing an artificial neural network to learn these SC points, we can perform Monte Carlo simulations with large time steps. Basic error analysis indicates that this data-driven scheme results in accurate SDE solutions …
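
The recipe outlined in the abstract can be pictured as follows: for each large time step, a trained ANN returns a handful of stochastic collocation (SC) points of the conditional distribution, a (piecewise) polynomial is interpolated through them, and standard normal draws are pushed through that interpolant. The Python sketch below illustrates only this idea under simplifying assumptions and is not the paper's implementation: the trained network is replaced by the exact conditional quantiles of geometric Brownian motion (GBM), and the helper names (collocation_values, large_step_sample) are invented for the example.

# Minimal sketch of large time step Monte Carlo sampling through stochastic
# collocation (SC) points. NOT the paper's implementation: the trained ANN that
# would return the SC points is stood in for by the exact GBM conditional
# quantiles, and all helper names are illustrative.
import numpy as np
from scipy.interpolate import BarycentricInterpolator

# Collocation nodes of the standard normal: probabilists' Gauss-Hermite nodes.
Z_NODES, _ = np.polynomial.hermite_e.hermegauss(5)

def collocation_values(s, mu, sigma, dt):
    """Conditional SC values of S_{t+dt} given S_t = s for GBM.
    In the Seven-League scheme these values would be produced by a trained ANN."""
    return s * np.exp((mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * Z_NODES)

def large_step_sample(s, mu, sigma, dt, n_paths, rng):
    """One big Monte Carlo step: interpolate the map z -> S_{t+dt} through the
    SC points and push standard normal draws through the interpolant."""
    g = BarycentricInterpolator(Z_NODES, collocation_values(s, mu, sigma, dt))
    return g(rng.standard_normal(n_paths))

rng = np.random.default_rng(0)
paths = large_step_sample(s=100.0, mu=0.05, sigma=0.2, dt=1.0,
                          n_paths=100_000, rng=rng)
print(paths.mean(), 100.0 * np.exp(0.05))  # sampled vs. exact GBM mean

With the interpolant in hand, one such step can stand in for many small steps of a classical discretization, which is where the speed-up claimed in the abstract comes from.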

Cited by 10 publications (8 citation statements)
References 36 publications
“…The method uses a Stochastic Collocation (SC) [10] based approach to approximate the target distribution. Then, artificial neural networks (ANNs) "learn" the proxy of the desired distribution, allowing for fast recovery [16,19].…”
Section: Swift Numerical Pricing Using Deep Learning (mentioning; confidence: 99%)
“…Inspired by the works in [16,19], the method relies on the technique of Stochastic Collocation (SC) [10], which is employed to accurately approximate the targeted distribution by means of piecewise polynomials. Artificial neural networks (ANNs) are used for fast recovery of the coefficients which uniquely determine the piecewise approximation.…”
Section: Introduction (mentioning; confidence: 99%)
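
To make the "fast recovery of the coefficients" concrete: the coefficients of the piecewise approximation are fixed by the SC values, so a network that maps model parameters and the step size to those values turns coefficient recovery into a single forward pass. The sketch below is an assumed setup, not the cited implementation: it trains scikit-learn's MLPRegressor on SC values generated from the exact GBM quantiles, whereas in practice the training targets would be generated offline with an accurate small time step reference scheme.

# Minimal sketch (assumed setup, not the cited implementation): a small ANN
# regresses from model parameters and the step size to the conditional SC values
# that uniquely determine the piecewise-polynomial approximation; once trained,
# recovering the coefficients for new parameters is one cheap forward pass.
import numpy as np
from sklearn.neural_network import MLPRegressor

Z_NODES, _ = np.polynomial.hermite_e.hermegauss(5)  # SC nodes of N(0,1)

def gbm_sc_values(sigma, dt, s0=1.0, mu=0.0):
    """Training targets: exact conditional SC values for GBM. In general these
    would be produced offline by an accurate small time step scheme."""
    return s0 * np.exp((mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * Z_NODES)

rng = np.random.default_rng(1)
sigmas = rng.uniform(0.1, 0.5, size=2000)
dts = rng.uniform(0.1, 2.0, size=2000)
X = np.column_stack([sigmas, dts])
Y = np.array([gbm_sc_values(sg, dt) for sg, dt in zip(sigmas, dts)])

ann = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000, random_state=0)
ann.fit(X, Y)

# Fast recovery for unseen parameters: ANN prediction vs. exact SC values.
print(np.round(ann.predict([[0.3, 1.5]])[0], 4))
print(np.round(gbm_sc_values(0.3, 1.5), 4))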
“…Consequently, the inferred integrators are robust and generalizable beyond the training set (see Section 3). A few works of a similar spirit can be found in [17,48,80], where neural-network-based approximations lead to large time-stepping integrators. However, the neural networks are computationally expensive to train and their parameters are often sensitive to the training data, making it difficult to systematically investigate their properties, such as the maximal admissible time step size for stability.…”
Section: Introduction (mentioning; confidence: 99%)
“…When the system is known, the graph-neural-network-based MDNet in [80] enables the simulation of microcanonical (i.e., Hamiltonian) molecular dynamics with large steps; a stochastic collocation method in [48] and a parametric inference approach in [45] have led to large time-stepping integrators for SDEs. This study extends the parametric inference approach in [45] to Hamiltonian systems.…”
Section: Introduction (mentioning; confidence: 99%)