2020
DOI: 10.48550/arxiv.2003.00596
Preprint

Overcoming the curse of dimensionality in the numerical approximation of high-dimensional semilinear elliptic partial differential equations

Christian Beck,
Lukas Gonon,
Arnulf Jentzen

Abstract: Recently, so-called full-history recursive multilevel Picard (MLP) approximation schemes have been introduced and shown to overcome the curse of dimensionality in the numerical approximation of semilinear parabolic partial differential equations (PDEs) with Lipschitz nonlinearities. The key contribution of this article is to introduce and analyze a new variant of MLP approximation schemes for certain semilinear elliptic PDEs with Lipschitz nonlinearities and to prove that the proposed approximation schemes ove…
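To connect the abstract to a concrete computation, the following is a minimal sketch of the previously introduced parabolic MLP scheme that the abstract refers to, not the new elliptic variant analyzed in this preprint. The example PDE (a semilinear heat equation with unit diffusion), the function name mlp, its parameters, and the closing usage example are illustrative assumptions, not taken from the paper.

    import numpy as np

    def mlp(t, x, n, M, T, f, g, rng):
        # One realization of the full-history recursive MLP estimator U_{n,M}(t, x)
        # for u solving du/dt + (1/2) Laplacian(u) + f(u) = 0 with u(T, .) = g,
        # using Brownian motion as the underlying diffusion (illustrative choice).
        if n == 0:
            return 0.0
        d = x.shape[0]
        # Monte Carlo estimate of E[g(x + W_{T-t})] with M^n samples.
        samples = x + np.sqrt(T - t) * rng.standard_normal((M ** n, d))
        result = np.mean([g(y) for y in samples])
        # Full-history recursion: level l uses M^(n-l) samples, each with a
        # uniformly distributed intermediate time R in [t, T].
        for l in range(n):
            acc = 0.0
            for _ in range(M ** (n - l)):
                R = t + (T - t) * rng.uniform()
                y = x + np.sqrt(R - t) * rng.standard_normal(d)
                diff = f(mlp(R, y, l, M, T, f, g, rng))
                if l > 0:
                    diff -= f(mlp(R, y, l - 1, M, T, f, g, rng))
                acc += diff
            result += (T - t) * acc / M ** (n - l)
        return result

    # Hypothetical usage: d = 10, f(u) = u - u^3, g(x) = exp(-|x|^2), estimate u(0, 0).
    rng = np.random.default_rng(0)
    x0 = np.zeros(10)
    print(mlp(0.0, x0, n=3, M=3, T=1.0, f=lambda u: u - u ** 3,
              g=lambda y: np.exp(-np.sum(y ** 2)), rng=rng))

The telescoping difference f(U_l) - f(U_{l-1}) across levels is what keeps the total number of samples, and hence the computational cost, from growing exponentially in the dimension.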

Cited by 15 publications (39 citation statements)
References: 39 publications
“…) specifies a part of the running and $g \in C^1(\mathbb{R}^d; \mathbb{R})$ the terminal costs, and $(X^u_s)_{t \le s \le T}$ denotes the unique strong solution to the controlled SDE (5) with initial condition $X^u_t = x_{\mathrm{init}}$. Throughout we assume that f and g are such that the expectation in (7) is finite, for all $(x_{\mathrm{init}}, t) \in \mathbb{R}^d \times [0, T]$.…”
Section: Optimal Control (mentioning)
confidence: 99%
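For orientation, the cost functional that equation (7) in this excerpt refers to is not reproduced there; in the standard formulation of such control problems it reads as below. This reconstruction, including the drift $b$, the diffusion $\sigma$, and the exact argument list of $f$, is an assumption made for illustration, not a quotation from the cited paper.

$$J(u; t, x_{\mathrm{init}}) = \mathbb{E}\Big[\int_t^T f(s, X^u_s, u_s)\,\mathrm{d}s + g(X^u_T)\Big], \qquad \mathrm{d}X^u_s = b(X^u_s, u_s)\,\mathrm{d}s + \sigma(X^u_s)\,\mathrm{d}W_s, \quad X^u_t = x_{\mathrm{init}}.$$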
“…Consequently, we can elevate divergences between path measures to loss functions on vector fields. To wit, let $D : \mathcal{P}(C) \times \mathcal{P}(C) \to \mathbb{R}_{\ge 0} \cup \{+\infty\}$ be a divergence, where, as before, $\mathcal{P}(C)$ denotes the set of probability measures on $C$. Then, setting…”
Section: Divergences and Loss Functions (mentioning)
confidence: 99%
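The excerpt is cut off before the construction it announces, but a typical instance of the idea is to take $D$ to be the Kullback–Leibler divergence and to score a vector field $v$ through the path measure $\mathbb{P}^v$ induced by the SDE it drives, for example

$$\mathcal{L}(v) = D_{\mathrm{KL}}\big(\mathbb{P}^{v}\,\big\|\,\mathbb{P}^{\ast}\big),$$

where $\mathbb{P}^{\ast}$ is the target path measure. The symbols $\mathcal{L}$, $\mathbb{P}^v$, and $\mathbb{P}^{\ast}$ are illustrative and not taken from the cited paper.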
“…It is one of the most challenging issues in applied mathematics to approximately solve high-dimensional partial differential equations (PDEs) and most of the numerical approximation methods for PDEs in the scientific literature suffer from the so-called curse of dimensionality in the sense that the number of computational operations employed in the corresponding approximation scheme to obtain an approximation precision $\varepsilon > 0$ grows exponentially in the PDE dimension and/or the reciprocal of $\varepsilon$ (cf., e.g., [42, Chapter 1] and [43, Chapter 9] for related concepts and cf., e.g., [4,5,7,19,29,32,33] for numerical approximation methods for nonlinear PDEs which do not suffer from the curse of dimensionality). Recently, certain deep learning based approximation methods for PDEs have been proposed and various numerical simulations for such methods suggest (cf., e.g., [1,2,3,8,9,10,12,13,14,15,17,18,21,26,27,28,30,34,39,40,41,44,45,46,48]) that deep neural network (DNN) approximations might have the capacity to indeed overcome the curse of dimensionality in the sense that the number of real parameters used to describe the approximating DNNs grows at most polynomially in both the PDE dimension $d \in \mathbb{N} = \{1, 2, \dots\}$…”
Section: Introduction (mentioning)
confidence: 99%
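To make the exponential growth concrete: a tensor-product grid with mesh width $\varepsilon$ in each of $d$ coordinate directions already uses $\varepsilon^{-d}$ grid points, so $\varepsilon = 0.1$ and $d = 100$ would require $10^{100}$ points, whereas a cost bound of order $d^k \varepsilon^{-k}$ for some fixed $k$ (the polynomial growth alluded to above) remains feasible in the same regime. The grid-based method here is a generic illustration, not one of the schemes cited in the excerpt.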
“…Our proofs of Theorem 1.1 above and Theorem 4.13 below, respectively, are based on an application of Proposition 3.10 in Grohs et al. [23] (see (I)-(VI) in the proof of Proposition 4.8 in Subsection 4.4…”
Section: Introduction (mentioning)
confidence: 99%