2020
DOI: 10.1515/jnma-2019-0074
Overcoming the curse of dimensionality in the numerical approximation of Allen–Cahn partial differential equations via truncated full-history recursive multilevel Picard approximations

Abstract: One of the most challenging problems in applied mathematics is the approximate solution of nonlinear partial differential equations (PDEs) in high dimensions. Standard deterministic approximation methods like finite differences or finite elements suffer from the curse of dimensionality in the sense that the computational effort grows exponentially in the dimension. In this work we overcome this difficulty in the case of reaction-diffusion type PDEs with a locally Lipschitz…

Cited by 30 publications (50 citation statements) · References 29 publications
“…MLP approximation methods are, roughly speaking, based on the idea to (IIa) reformulate the PDE under consideration as a stochastic fixed point problem with the PDE solution being the fixed point of the stochastic fixed point equation, to (IIb) approximate the fixed point through Banach fixed point iterates (which are also referred to as Picard iterates in the context of integral fixed point equations), and to (IIc) approximate the resulting Banach fixed point iterates through suitable full-history recursive multilevel Monte Carlo approximations. In the case of MLP approximation methods there are both encouraging numerical simulation results (see [8, 19]) and rigorous mathematical results which prove that MLP approximation methods do indeed overcome the curse of dimensionality in the numerical approximation of nonlinear second-order PDEs (see [5, 6, 18, 24, 35, 37–39]). However, in each of the convergence results for MLP approximation methods in the scientific literature it is assumed that the coefficient functions in front of the second-order differential operator are affine linear.…”
Section: Introduction
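Steps (IIa)–(IIc) can be made concrete. The following is a minimal NumPy sketch of the MLP scheme for a semilinear heat PDE with the Allen–Cahn nonlinearity f(v) = v − v³; the terminal condition g, the parameters n and M, and the test dimension are illustrative assumptions rather than choices taken from the cited papers.

```python
import numpy as np

# Minimal sketch of a full-history recursive multilevel Picard (MLP)
# approximation, assuming the semilinear heat PDE
#     du/dt + Laplace(u) + f(u) = 0,   u(T, x) = g(x),
# with the Allen-Cahn nonlinearity f(v) = v - v^3.  The terminal
# condition g and all parameters below are illustrative assumptions.

rng = np.random.default_rng(0)

def f(v):
    return v - v ** 3                     # Allen-Cahn reaction term

def g(x):
    return 1.0 / (2.0 + 0.4 * np.sum(x ** 2))   # example terminal condition

def mlp(n, M, t, x, T):
    """U_{n,M}(t, x): n-th MLP iterate with Monte Carlo basis M."""
    if n == 0:
        return 0.0                        # zeroth Picard iterate (IIb)
    d = x.shape[0]
    # (IIa)/(IIc): Monte Carlo estimate of E[g(X_T)] in the fixed point
    # equation; X is the scaled Brownian motion generated by Laplace(u)
    u = sum(g(x + np.sqrt(2.0 * (T - t)) * rng.standard_normal(d))
            for _ in range(M ** n)) / M ** n
    # (IIb)/(IIc): multilevel telescoping sum over Picard differences,
    # with the time integral sampled uniformly on [t, T]
    for l in range(n):
        s = 0.0
        for _ in range(M ** (n - l)):
            r = t + (T - t) * rng.uniform()
            xr = x + np.sqrt(2.0 * (r - t)) * rng.standard_normal(d)
            s += f(mlp(l, M, r, xr, T))
            if l > 0:
                s -= f(mlp(l - 1, M, r, xr, T))
        u += (T - t) * s / M ** (n - l)
    return u

# Example: approximate u(0, 0) for the PDE above in dimension d = 10
print(mlp(n=3, M=3, t=0.0, x=np.zeros(10), T=1.0))
```

The recursion realizes the "full history": the level-l correction reuses the cheaper iterates U_{l,M} and U_{l−1,M}, and every recursive call draws fresh, independent randomness, mirroring the independence structure of the telescoping sum in the scheme's analysis.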
“…Especially, there are two types of approximation methods which have turned out to be quite successful in the numerical approximation of solutions of high-dimensional nonlinear second-order PDEs, namely, (I) deep learning based approximation methods for PDEs (cf., e.g., [1–3, 9–11, 13, 14, 16, 17, 20, 22, 23, 25, 28–32, 41, 43, 46–52, 55, 56]) and (II) full-history recursive multilevel Picard approximation methods for PDEs (cf. [6, 8, 18, 19, 24, 35, 37–39]; in the following we abbreviate full-history recursive multilevel Picard as MLP). Deep learning based approximation methods for PDEs are, roughly speaking, based on the idea to (Ia) approximate the PDE problem under consideration through a stochastic optimization problem involving deep neural networks as approximations for the solution or the derivatives of the solution of the PDE under consideration and to (Ib) apply stochastic gradient descent methods to approximately solve the resulting stochastic optimization problem.…”
Section: Introduction
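As a toy illustration of (Ia) and (Ib), the sketch below recasts a linear Kolmogorov PDE (a deliberate simplification; semilinear problems require deep BSDE-type constructions) as the stochastic minimization of E|N_θ(x) − g(x + √(2T) W)|² over a small network N_θ, solved by plain SGD with manually coded gradients. Network size, sampling box, batch size, and step size are arbitrary illustrative choices, not taken from the cited works.

```python
import numpy as np

# Toy version of (Ia)/(Ib): learn u(0, .) for the *linear* heat equation
#     du/dt + Laplace(u) = 0,   u(T, x) = g(x),
# by minimizing the empirical loss E[|N_theta(x) - g(x + sqrt(2T) W)|^2]
# with SGD.  All sizes and constants are illustrative assumptions.

rng = np.random.default_rng(1)
d, h, T, lr = 10, 50, 1.0, 1e-2

def g(x):                                  # terminal condition, row-wise
    return 1.0 / (2.0 + 0.4 * np.sum(x ** 2, axis=1))

# one-hidden-layer network N_theta(x) = w2 . tanh(W1 x + b1) + b2
W1 = rng.standard_normal((h, d)) / np.sqrt(d)
b1 = np.zeros(h)
w2 = rng.standard_normal(h) / np.sqrt(h)
b2 = 0.0

for step in range(2000):
    x = rng.uniform(-1.0, 1.0, size=(64, d))   # (Ia) sample space points
    y = g(x + np.sqrt(2.0 * T) * rng.standard_normal((64, d)))
    z = np.tanh(x @ W1.T + b1)                 # forward pass
    pred = z @ w2 + b2
    grad_pred = 2.0 * (pred - y) / len(y)      # (Ib) SGD on the loss
    delta = (grad_pred[:, None] * w2[None, :]) * (1.0 - z ** 2)
    W1 -= lr * delta.T @ x
    b1 -= lr * delta.sum(axis=0)
    w2 -= lr * z.T @ grad_pred
    b2 -= lr * grad_pred.sum()

# After training, the network evaluated at fresh x approximates
# u(0, x) = E[g(x + sqrt(2T) W)].
```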
“…(I) Deep learning based approximation methods for PDEs; cf., e.g., [3–5, 9–12, 15–17, 19, 20, 24, 26, 28, 31, 36–40, 46, 48, 49, 52–63, 65, 66].
(II) Full-history recursive multilevel Picard approximation methods for PDEs; cf., e.g., [6, 7, 22, 23, 29, 41, 43–45] (in the following we abbreviate full-history recursive multilevel Picard by MLP).…”
Section: Introduction
“…SFPEs of the form in (2) have a strong connection with semilinear Kolmogorov PDEs and arise, for example, in models from the environmental sciences as well as in pricing problems from financial engineering (cf., for example, Burgard & Kjaer [4], Crépey et al. [5], Duffie et al. [6], and Henry-Labordère [10]). SFPEs such as (2) are also important for full-history recursive multilevel Picard approximation (MLP) methods, which were recently introduced in [11, 13]; see also [1, 12, 14, 15]. In [13, 14] it has been shown that functions which satisfy SFPEs related to semilinear Kolmogorov PDEs can be approximated by MLP schemes without the curse of dimensionality.…”
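Equation (2) of the citing paper is not reproduced in this excerpt. For orientation, a representative SFPE of the kind described (the standard Feynman–Kac form underlying MLP methods, with drift μ and diffusion coefficient σ; this particular form is an assumption, not a quotation) reads:

```latex
% Representative SFPE of the type described above; X^{t,x} denotes the
% diffusion with drift \mu and diffusion coefficient \sigma started at
% x at time t.  (Equation (2) of the citing paper is not reproduced in
% the excerpt, so this form is an assumption.)
\begin{equation*}
  u(t,x) = \mathbb{E}\Bigl[ g\bigl(X^{t,x}_T\bigr)
    + \int_t^T f\bigl(s, X^{t,x}_s, u(s, X^{t,x}_s)\bigr)\,\mathrm{d}s \Bigr].
\end{equation*}
% By the Feynman--Kac correspondence, a fixed point u of this equation
% solves the semilinear Kolmogorov PDE
\begin{equation*}
  \partial_t u
    + \tfrac{1}{2}\operatorname{Trace}\bigl(\sigma\sigma^{\top}
      \operatorname{Hess}_x u\bigr)
    + \mu \cdot \nabla_x u
    + f(t, x, u) = 0,
  \qquad u(T, x) = g(x).
\end{equation*}
```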