2021
DOI: 10.1007/s10208-021-09514-y
Overcoming the Curse of Dimensionality in the Numerical Approximation of Parabolic Partial Differential Equations with Gradient-Dependent Nonlinearities

Abstract: Partial differential equations (PDEs) are a fundamental tool in the modeling of many real-world phenomena. In a number of such real-world phenomena the PDEs under consideration contain gradient-dependent nonlinearities and are high-dimensional. Such high-dimensional nonlinear PDEs can in nearly all cases not be solved explicitly, and it is one of the most challenging tasks in applied mathematics to solve high-dimensional nonlinear PDEs approximately. It is especially very challenging to design approximation al…


Cited by 22 publications (19 citation statements)
References 49 publications
“…Next, (39), Jensen's inequality, a Lyapunov-type estimate (see, e.g., [15, Lemma 2.2]) combined with (35) and (38), the fact that $c \le \rho$, and Lemma 2.1 (applied with $\beta \curvearrowleft \beta - 1$ in the notation of Lemma 2.1) combined with (37)–(39), (33), the assumption that $\kappa \in [0, p/(3\beta + 1)]$, and the fact that $\forall\, h \in (0, T],\, x \in D_h \colon \varphi(x) \le c (b_3 h)^{-\kappa}$ (see (45)) imply that for all $t \in [t_0, T]$, $q \in [1, \infty)$ with $\beta q \le p$ it holds that $\mu\big(Y^{\theta, x_0}_{t_0, t}\big)$…”
Section: Strong Error Estimates for Approximations of SDEs
confidence: 99%
“…To the best of our knowledge, the only approximation method which has been mathematically proven to overcome the curse of dimensionality for certain semilinear PDEs is the full-history recursive multilevel Picard (MLP) method introduced in [20] and analyzed, e.g., in [45,48,53,46,51,6,50,27]. In this article we extend the analysis of MLP approximations to the case of semilinear PDEs with locally monotone coefficient functions and globally Lipschitz continuous, gradient-independent nonlinearities.…”
Section: Introduction
confidence: 99%
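The MLP method quoted above estimates the Feynman–Kac fixed point of a semilinear heat equation by a recursive multilevel Monte Carlo scheme: at level n, the terminal condition is sampled many times while the nonlinear Picard correction reuses cheaper lower-level estimates. The following is a minimal illustrative sketch, not the authors' implementation; the terminal condition g, the Lipschitz nonlinearity f, and the parameters n, M are hypothetical choices.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1.0  # terminal time

def g(x):
    # hypothetical terminal condition u(T, x) = g(x)
    return np.log(0.5 * (1.0 + np.sum(x ** 2)))

def f(u):
    # hypothetical globally Lipschitz, gradient-independent nonlinearity
    return u / (1.0 + u ** 2)

def mlp(n, M, t, x):
    """Level-n MLP estimate of u(t, x) for u_t + (1/2)Δu + f(u) = 0, u(T,·) = g."""
    if n == 0:
        return 0.0
    d = len(x)
    # terminal term: Monte Carlo average of g over M**n Brownian endpoints
    W = rng.normal(0.0, np.sqrt(T - t), size=(M ** n, d))
    est = float(np.mean([g(x + w) for w in W]))
    # full-history recursive Picard corrections, telescoping over lower levels
    for level in range(n):
        m = M ** (n - level)
        acc = 0.0
        for _ in range(m):
            r = t + (T - t) * rng.uniform()           # uniform random time in [t, T]
            y = x + rng.normal(0.0, np.sqrt(r - t), d)  # Brownian point at time r
            acc += f(mlp(level, M, r, y))
            if level > 0:
                acc -= f(mlp(level - 1, M, r, y))     # control variate from level below
        est += (T - t) * acc / m
    return est

# dimension d = 10: cost grows only polynomially in d for fixed (n, M)
print(mlp(3, 3, 0.0, np.zeros(10)))
```

The key point, reflected in the quote, is that the total number of function evaluations scales polynomially in the dimension d, since each sample only touches d-dimensional Gaussian increments; no spatial grid is ever built.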
“…These theoretical results have justified the application of DNNs to solve high-dimensional PDEs recently in [24,25,26,27,28,29,30,31]. Two main advantages of deep-learning-based methods presented in these studies are summarized as follows: firstly, the curse of dimensionality can be weakened or even overcome for certain classes of PDEs [32,33]; secondly, deep-learning-based PDE solvers are mesh-free, avoiding the tedious mesh generation for complex domains required by traditional solvers. Thus, deep-learning-based methods have shown tremendous potential to surpass other numerical methods, especially in solving high-dimensional PDEs in complex domains.…”
Section: Introduction: Problem Statement
confidence: 96%
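The mesh-free property mentioned in the quote comes from evaluating a PDE residual at randomly sampled collocation points rather than on a grid. A minimal sketch of that idea, using a closed-form trial function in place of a trained network (all names and the Poisson test problem are illustrative assumptions, not from the cited papers):

```python
import numpy as np

rng = np.random.default_rng(1)

# random collocation points in the unit square -- no mesh is generated
pts = rng.uniform(0.0, 1.0, size=(1000, 2))

def u(p):
    # trial solution; in a deep-learning solver this would be a neural network
    return np.sin(np.pi * p[:, 0]) * np.sin(np.pi * p[:, 1])

def lap_u(p):
    # exact Laplacian of the trial solution above
    return -2.0 * np.pi ** 2 * u(p)

def source(p):
    # right-hand side chosen so that -Δu = source holds exactly
    return 2.0 * np.pi ** 2 * u(p)

# mean squared PDE residual over the random collocation points;
# a solver would minimize this loss over the network parameters
loss = np.mean((lap_u(pts) + source(pts)) ** 2)
print(loss)  # 0.0 here, since the trial function solves the PDE exactly
```

In practice the Laplacian is obtained by automatic differentiation of the network and the loss also includes boundary terms; the sketch only shows why no mesh is needed: sampling points in an arbitrary domain is trivial even when meshing it is not.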
“…MLP approximations have previously been shown to overcome the curse of dimensionality in the case of a number of semilinear PDE problems (cf. [7,8,11,12,46,47,54,71–75]), and this is also the key ingredient in this article to overcome the curse of dimensionality in the numerical approximation of solution paths of BSDEs.…”
Section: Introduction
confidence: 99%