2021
DOI: 10.4310/cms.2021.v19.n5.a1

A proof that deep artificial neural networks overcome the curse of dimensionality in the numerical approximation of Kolmogorov partial differential equations with constant diffusion and nonlinear drift coefficients

Abstract: In recent years deep artificial neural networks (DNNs) have been successfully employed in numerical simulations for a multitude of computational problems including, for example, object and face recognition, natural language processing, fraud detection, computational advertisement, and numerical approximations of partial differential equations (PDEs). These numerical simulations indicate that DNNs seem to have the fundamental flexibility to overcome the curse of dimensionality in the sense that the number of re…

Cited by 43 publications (50 citation statements) | References 66 publications

Citation statements (ordered by relevance):
“…As a benchmark solution in our experiments we use the Euler approximation of the Riccati equation with 80 × 2^8 time steps, and 160 time steps for X. The processes (Y, Z) are approximated by (31). In Figure 3, the approximation of (X, Y, Z) is compared to the analytic solution in the mean, in an empirical credibility interval (again, defined as the area between the 5th and 95th percentiles at each time point), and for a single path.…”
Section: Linear Quadratic Control Problems (mentioning)
confidence: 99%
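The quoted benchmark rests on two standard ingredients: a fine Euler discretization of the Riccati equation and empirical percentile bands over simulated paths. A minimal Python sketch of both, with entirely hypothetical coefficients and a toy SDE for X (the cited paper's model is not reproduced here):

```python
import numpy as np

# A minimal sketch, not the cited paper's code: forward Euler for a scalar
# Riccati ODE and an empirical 5th/95th-percentile band over simulated paths.
# All coefficients and the SDE for X below are hypothetical stand-ins.

T = 1.0
n_fine = 80 * 2**8              # Riccati benchmark grid, as in the quote
n_coarse = 160                  # grid for the forward process X
rng = np.random.default_rng(0)

def euler_riccati(p0, a=1.0, b=0.5, q=1.0, n=n_fine):
    """Euler steps for p'(t) = -2*a*p + b*p**2 - q on [0, T]."""
    dt = T / n
    p = np.empty(n + 1)
    p[0] = p0
    for k in range(n):
        p[k + 1] = p[k] + dt * (-2.0 * a * p[k] + b * p[k] ** 2 - q)
    return p

def simulate_x(x0=1.0, mu=-0.5, sigma=0.3, n=n_coarse, n_paths=10_000):
    """Euler-Maruyama paths of the toy SDE dX = mu*X dt + sigma dW."""
    dt = T / n
    x = np.full((n_paths, n + 1), x0)
    for k in range(n):
        x[:, k + 1] = (x[:, k] + mu * x[:, k] * dt
                       + sigma * rng.normal(0.0, np.sqrt(dt), n_paths))
    return x

p = euler_riccati(p0=1.0)
paths = simulate_x()
# Credibility band as defined in the quote: pointwise 5th/95th percentiles.
band_lo, band_hi = np.percentile(paths, [5, 95], axis=0)
mean_path = paths.mean(axis=0)
```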
“…The method has proven able to approximate a wide class of equations in very high dimensions (at least 100). Since the original Deep BSDE solver publication, several papers with adjustments of the algorithm, e.g., [33,34,36,38,39,24,50,51], and others with convergence analysis, e.g., [25,27,28,29,30,31,32], have been published. In addition to being forward methods, these algorithms are global in their approximation, meaning that the optimization of all involved neural networks is carried out simultaneously.…”
Section: Introduction (mentioning)
confidence: 99%
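The "global" property mentioned in the quote, all time-step networks trained jointly against a single terminal loss, can be sketched as follows; the dynamics, driver, and network sizes are toy assumptions, not the cited solvers:

```python
import torch

# Minimal sketch of the global Deep BSDE idea: the initial value Y0 and one
# small network per time step (for Z) are optimized together against a single
# terminal loss. Dynamics and terminal condition are toy stand-ins.

d, n_steps, T = 10, 20, 1.0
dt = T / n_steps
sqrt_dt = dt ** 0.5

g = lambda x: (x ** 2).sum(dim=1, keepdim=True)   # terminal condition
f = lambda y, z: -0.05 * y                        # toy driver

y0 = torch.nn.Parameter(torch.zeros(1))
z_nets = torch.nn.ModuleList(
    torch.nn.Sequential(torch.nn.Linear(d, 32), torch.nn.ReLU(),
                        torch.nn.Linear(32, d))
    for _ in range(n_steps)
)
opt = torch.optim.Adam([y0, *z_nets.parameters()], lr=1e-3)

for step in range(200):
    x = torch.zeros(256, d)            # batch of paths, X_0 = 0
    y = y0.expand(256, 1)
    for k in range(n_steps):
        dw = sqrt_dt * torch.randn(256, d)
        z = z_nets[k](x)
        # BSDE step: dY = -f dt + Z dW, simulated forward in time.
        y = y - f(y, z) * dt + (z * dw).sum(dim=1, keepdim=True)
        x = x + dw                     # X is a Brownian motion here
    loss = ((y - g(x)) ** 2).mean()    # one terminal loss for everything
    opt.zero_grad(); loss.backward(); opt.step()
```

Because the single terminal loss depends on every `z_nets[k]`, one backward pass yields gradients for all networks at once; that is the simultaneous optimization the quote contrasts with step-by-step schemes.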
“…In generalization theory, it was shown that DNNs can achieve a dimension-independent error rate for solving PDEs [20,21,22,23]. These theoretical results have justified the recent application of DNNs to solving high-dimensional PDEs in [24,25,26,27,28,29,30,31]. Two main advantages of the deep-learning-based methods presented in these studies are summarized as follows: firstly, the curse of dimensionality can be weakened or even overcome for certain classes of PDEs [32,33]; secondly, deep-learning-based PDE solvers are mesh-free, avoiding the tedious mesh generation that traditional solvers require for complex domains.…”
Section: Introduction, 1. Problem Statement (mentioning)
confidence: 99%
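The mesh-free point in the quoted passage amounts to drawing random collocation points instead of building a grid, so the same loss works unchanged as the dimension grows. A hedged PINN-style sketch for a toy Poisson problem (not an example from the cited works):

```python
import torch

# Sketch under toy assumptions: fit u_theta to -Laplacian(u) = 1 on [0,1]^d
# with u = 0 on (part of) the boundary, using randomly sampled collocation
# points in place of any mesh. Sizes and sampling are illustrative only.

d = 5
net = torch.nn.Sequential(torch.nn.Linear(d, 64), torch.nn.Tanh(),
                          torch.nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def laplacian(u, x):
    # Trace of the Hessian of u w.r.t. x via repeated autograd.
    grad = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    lap = 0.0
    for i in range(d):
        lap = lap + torch.autograd.grad(grad[:, i].sum(), x,
                                        create_graph=True)[0][:, i]
    return lap.unsqueeze(1)

for step in range(100):
    x = torch.rand(512, d, requires_grad=True)    # random interior points
    u = net(x)
    residual = -laplacian(u, x) - 1.0             # residual of -Lap(u) = 1
    xb = torch.rand(512, d)                       # crude boundary sampling:
    xb[:, 0] = (torch.rand(512) < 0.5).float()    # pin one coordinate to {0,1}
    loss = (residual ** 2).mean() + (net(xb) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```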
“…A notorious challenge that appears in the numerical treatment of PDEs is the curse of dimensionality: the computational complexity increases exponentially in the dimension of the state space. In recent years, however, multiple numerical [19,35,69] as well as theoretical studies [26,40] have indicated that a combination of Monte Carlo methods and neural networks offers a promising way to overcome this problem. This paper centers around two strategies that allow for solving quite general nonlinear PDEs:…”
Section: Introduction (mentioning)
confidence: 99%
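One concrete instance of the Monte Carlo/neural-network combination the quote refers to: by Feynman-Kac, solutions of a heat-type Kolmogorov PDE are expectations over Brownian paths, so grid-free pointwise Monte Carlo estimates can serve as regression targets for a network. A minimal sketch under toy assumptions, not the method of any cited paper:

```python
import torch

# For u_t = (1/2) * Laplacian(u) with u(0, .) = phi, Feynman-Kac gives
# u(T, x) = E[phi(x + W_T)]. The expectation is estimated by Monte Carlo
# (no grid in d dimensions) and a network is fit to the pointwise estimates.
# phi, the domain, and all sizes below are illustrative assumptions.

d, T = 10, 1.0
phi = lambda x: (x ** 2).sum(dim=-1, keepdim=True)   # toy initial condition

def mc_solution(x, n_samples=4096):
    """Monte Carlo estimate of u(T, x) for each row of x, shape (batch, 1)."""
    w = T ** 0.5 * torch.randn(n_samples, *x.shape)  # samples of W_T
    return phi(x + w).mean(dim=0)

net = torch.nn.Sequential(torch.nn.Linear(d, 64), torch.nn.ReLU(),
                          torch.nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(200):
    x = 2.0 * torch.rand(128, d) - 1.0               # points in [-1, 1]^d
    with torch.no_grad():
        target = mc_solution(x)                      # pointwise MC labels
    loss = ((net(x) - target) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```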
“…For rigorous results on the capability of neural networks to overcome the curse of dimensionality we refer to [25,37,40], each analyzing certain special PDE cases. In addition to the methods referred to above, let us also mention [78] as an alternative approach exploiting weak PDE formulations, as well as [53], which approximates operators by neural networks (relying, however, on training data from reference solutions); a typical application is mapping an initial condition to the solution of a PDE.…”
Section: Introduction (mentioning)
confidence: 99%