On the one hand, the explicit Euler scheme fails to converge strongly to the exact solution of a stochastic differential equation (SDE) with a superlinearly growing and globally one-sided Lipschitz continuous drift coefficient. On the other hand, the implicit Euler scheme is known to converge strongly to the exact solution of such an SDE. Implementations of the implicit Euler scheme, however, require additional computational effort. In this article we therefore propose an explicit and easily implementable numerical method for such an SDE and show that this method converges strongly with the standard order one-half to the exact solution of the SDE. Simulations reveal that this explicit strongly convergent numerical scheme is considerably faster than the implicit Euler scheme.
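The explicit, easily implementable method described in this abstract is a "tamed" Euler scheme, in which the drift increment is normalized so that it stays bounded on each step. The sketch below is a minimal illustrative implementation under our own assumptions (the function names and the example SDE dX_t = -X_t^3 dt + dW_t are ours, not taken from the article):

```python
import math
import random

def tamed_euler(x0, T, N, drift, diffusion, seed=0):
    """One path of a drift-tamed Euler scheme: the drift increment is divided
    by 1 + h*|drift(x)|, so it remains bounded even where the drift grows
    superlinearly. This prevents the explosions of the plain Euler scheme."""
    rng = random.Random(seed)
    h = T / N
    x = x0
    for _ in range(N):
        dw = rng.gauss(0.0, math.sqrt(h))   # Brownian increment
        fx = drift(x)
        x = x + h * fx / (1.0 + h * abs(fx)) + diffusion(x) * dw
    return x

# Example SDE with superlinearly growing, one-sided Lipschitz drift:
# dX_t = -X_t^3 dt + dW_t, X_0 = 1.
x_T = tamed_euler(1.0, T=1.0, N=1000,
                  drift=lambda x: -x**3, diffusion=lambda x: 1.0)
```

Unlike an implicit scheme, each step costs only a few extra arithmetic operations on top of plain Euler, which is the source of the speed advantage reported in the simulations.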
The stochastic Euler scheme is known to converge to the exact solution of a stochastic differential equation (SDE) with globally Lipschitz continuous drift and diffusion coefficients. Recent results extend this convergence to coefficients that grow at most linearly. For superlinearly growing coefficients, finite-time convergence in the strong mean-square sense remained an open question. In this article, we answer this question in the negative and prove, for a large class of SDEs with non-globally Lipschitz continuous coefficients, that Euler's approximation converges neither in the strong mean-square sense nor in the numerically weak sense to the exact solution at a finite time point. Even worse, the difference between the exact solution and the numerical approximation at a finite time point diverges to infinity in the strong mean-square sense and in the numerically weak sense.
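The mechanism behind this divergence can be caricatured without any noise at all. The following sketch (our illustration, assuming the cubic drift f(x) = -x^3) shows how a single large value, which a Brownian path produces with small but positive probability, makes the explicit Euler iterates explode doubly exponentially:

```python
# Deterministic caricature of the divergence mechanism: for f(x) = -x^3,
# one explicit Euler step maps x to x - h*x**3, whose magnitude is roughly
# h*|x|**3 for large |x|. Once |x| exceeds about sqrt(2/h), the iterates
# overshoot and grow doubly exponentially fast.
h = 0.1
x = 10.0  # a rare but possible large excursion of the driving noise
trajectory = [x]
for _ in range(4):
    x = x - h * x**3
    trajectory.append(x)
# magnitudes along the trajectory: 10, 90, ~7.3e4, ~3.9e13, ...
```

Because such excursions are rare, moments of the Euler approximation can still diverge even though typical sample paths look harmless.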
Many stochastic differential equations (SDEs) in the literature have a superlinearly growing nonlinearity in their drift or diffusion coefficient. Unfortunately, moments of the computationally efficient Euler-Maruyama approximation method diverge for these SDEs in finite time. This article develops a general theory based on rare events for studying integrability properties such as moment bounds for discrete-time stochastic processes. Using this approach, we establish moment bounds for fully and partially drift-implicit Euler methods and for a class of new explicit approximation methods which require only a few more arithmetic operations than the Euler-Maruyama method. These moment bounds are then used to prove strong convergence of the proposed schemes. Finally, we illustrate our results for several SDEs from finance, physics, biology and chemistry.
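For comparison with the explicit schemes, one step of the drift-implicit Euler method requires solving a nonlinear equation. The sketch below (our own illustration, assuming additive noise and the example drift f(x) = -x^3, for which the implicit equation has a unique solution because the drift is one-sided Lipschitz) solves that equation with Newton's method:

```python
import math
import random

def implicit_euler_step(x, h, dw, drift, drift_prime, newton_iters=20):
    """One step of the drift-implicit Euler scheme with additive noise:
    solve  y = x + h*drift(y) + dw  for y by Newton's method."""
    y = x  # initial guess: the previous state
    for _ in range(newton_iters):
        g = y - x - h * drift(y) - dw        # residual of the implicit equation
        gp = 1.0 - h * drift_prime(y)        # its derivative in y (here >= 1)
        y = y - g / gp
    return y

# dX_t = -X_t^3 dt + dW_t: the Newton residual y + h*y**3 - (x + dw) is
# strictly increasing in y, so the implicit step is well defined.
rng = random.Random(0)
h, x = 0.01, 1.0
for _ in range(100):
    x = implicit_euler_step(x, h, rng.gauss(0.0, math.sqrt(h)),
                            lambda y: -y**3, lambda y: -3.0 * y**2)
```

The inner Newton iteration is exactly the additional computational effort that the explicit tamed schemes avoid.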
The celebrated Hörmander condition is a sufficient (and nearly necessary) condition for a second-order linear Kolmogorov partial differential equation (PDE) with smooth coefficients to be hypoelliptic. As a consequence, the solutions of Kolmogorov PDEs are smooth at all positive times if the coefficients of the PDE are smooth and satisfy Hörmander's condition, even if the initial function is only continuous but not differentiable. First-order linear Kolmogorov PDEs with smooth coefficients do not have this smoothing effect but at least preserve regularity, in the sense that solutions are smooth if their initial functions are smooth. In this article, we consider the intermediate regime of non-hypoelliptic second-order Kolmogorov PDEs with smooth coefficients. The main observation of this article is that there exist counterexamples to regularity preservation in that case. More precisely, we give an example of a second-order linear Kolmogorov PDE with globally bounded and smooth coefficients and a smooth initial function with compact support such that the unique globally bounded viscosity solution of the PDE is not even locally Hölder continuous. From the perspective of probability theory, the existence of this example PDE has the consequence that there exists a stochastic differential equation (SDE) with globally bounded and smooth coefficients and a smooth function with compact support which is mapped by the corresponding transition semigroup to a function which is not locally Hölder continuous. In other words, degenerate noise can have a roughening effect. A further implication of this loss-of-regularity phenomenon is that numerical approximations may converge to the true solution of the SDE without any polynomial rate of convergence, however small. More precisely, we prove for an example SDE with globally bounded and smooth coefficients that the standard Euler approximations converge to the exact solution of the SDE in the strong and numerically weak sense, but at a rate that is slower than any power law. (Published in The Annals of Probability, 2015, Vol. 43, No. 2, 468-527.)
Deep neural networks and other deep learning methods have been applied very successfully to the numerical approximation of high-dimensional nonlinear parabolic partial differential equations (PDEs), which are widely used in finance, engineering, and the natural sciences. In particular, simulations indicate that algorithms based on deep learning overcome the curse of dimensionality in the numerical approximation of solutions of semilinear PDEs. For certain linear PDEs this has also been proved mathematically. The key contribution of this article is to rigorously prove this for the first time for a class of nonlinear PDEs. More precisely, we prove in the case of semilinear heat equations with gradient-independent nonlinearities that the numbers of parameters of the employed deep neural networks grow at most polynomially in both the PDE dimension and the reciprocal of the prescribed approximation accuracy. Our proof relies on recently introduced full history recursive multilevel Picard approximations of semilinear PDEs.
Chapter 1. Introduction
1.1. Notation
Chapter 2. Strong stability analysis for solutions of SDEs
2.1. Setting
2.2. Exponential integrability bounds for solutions of SDEs
2.3. An identity for Lyapunov-type functions
2.4. Two solution approach
Chapter 3. Strong completeness of SDEs
3.1. Theorems of Yamada-Watanabe and of Kolmogorov-Chentsov type
3.2. Proofs of the strong completeness results
3.3. Strong completeness for SDEs with additive noise
Chapter 4. Examples of SODEs
4.1. Setting
4.2. Stochastic van der Pol oscillator
4.3. Stochastic Duffing-van der Pol oscillator
4.4. Stochastic Lorenz equation with additive noise
4.5. Langevin dynamics
4.6. Brownian dynamics (over-damped Langevin dynamics)
4.7. Stochastic SIR model
4.8. Experimental psychology model
4.9. Stochastic Brusselator in the well-stirred case
4.10. Stochastic volatility processes and interest rate models (CIR, Ait-Sahalia, 3/2-model, CEV)
4.11. Wright-Fisher diffusion
Chapter 5. Examples of SPDEs
5.1. Setting
5.2. Stochastic Burgers equation with a globally bounded diffusion coefficient and trace class noise
5.3. Cahn-Hilliard-Cook equation with trace class noise
5.4. Non-linear wave equation
Acknowledgments
Bibliography
It has long been known that high-dimensional linear parabolic partial differential equations (PDEs) can be approximated by Monte Carlo methods with a computational effort which grows polynomially both in the dimension and in the reciprocal of the prescribed accuracy. In other words, linear PDEs do not suffer from the curse of dimensionality. For general semilinear PDEs with Lipschitz coefficients, however, it remained an open question whether these suffer from the curse of dimensionality. In this paper we partially solve this open problem. More precisely, we prove in the case of semilinear heat equations with gradient-independent and globally Lipschitz continuous nonlinearities that the computational effort of a variant of the recently introduced multilevel Picard approximations grows at most polynomially both in the dimension and in the reciprocal of the required accuracy.
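The classical fact cited here for linear PDEs can be made concrete with a Feynman-Kac Monte Carlo estimator. The sketch below (our own illustration; the function names and the quadratic test function are assumptions) approximates the solution of the d-dimensional heat equation u_t = Laplacian(u), u(0, .) = phi, via u(t, x) = E[phi(x + sqrt(2t) Z)] with Z standard normal; the cost is O(samples * d), hence polynomial in the dimension:

```python
import math
import random

def heat_mc(phi, x, t, samples=2000, seed=0):
    """Monte Carlo / Feynman-Kac estimate of u(t, x) for the heat equation
    u_t = Laplacian(u), u(0, .) = phi, using
    u(t, x) = E[ phi(x + sqrt(2*t) * Z) ] with Z a standard normal vector.
    The cost O(samples * d) grows only linearly in the dimension d."""
    rng = random.Random(seed)
    s = math.sqrt(2.0 * t)
    total = 0.0
    for _ in range(samples):
        total += phi([xi + s * rng.gauss(0.0, 1.0) for xi in x])
    return total / samples

# d = 100 dimensions: estimate u(t, 0) for phi(x) = ||x||^2.
# The exact value is 2*t*d (here 100), which the estimate should approach.
d, t = 100, 0.5
est = heat_mc(lambda x: sum(xi * xi for xi in x), [0.0] * d, t)
```

The open problem addressed in the paper is whether a comparable polynomial-cost guarantee holds once a nonlinearity is added, which is what the multilevel Picard variant achieves for gradient-independent Lipschitz nonlinearities.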
The Euler-Maruyama scheme is known to diverge strongly and numerically weakly when applied to nonlinear stochastic differential equations (SDEs) with superlinearly growing and globally one-sided Lipschitz continuous drift coefficients. Classical Monte Carlo simulations do not, however, suffer from this divergence behavior of Euler's method, because the divergence happens on rare events. Indeed, for such nonlinear SDEs the classical Monte Carlo Euler method has been shown to converge by exploiting that the Euler approximations diverge only on events whose probabilities decay to zero very rapidly. Significantly more efficient than the classical Monte Carlo Euler method is the recently introduced multilevel Monte Carlo Euler method. The main observation of this article is that this multilevel Monte Carlo Euler method does not, in contrast to classical Monte Carlo methods, converge in general in the case of such nonlinear SDEs. More precisely, we establish divergence of the multilevel Monte Carlo Euler method for a family of SDEs with superlinearly growing and globally one-sided Lipschitz continuous drift coefficients. In particular, the multilevel Monte Carlo Euler method diverges for these nonlinear SDEs on an event that is not at all rare but has probability one. As a consequence for applications, we recommend not to use the multilevel Monte Carlo Euler method for SDEs with superlinearly growing nonlinearities. Instead we propose to combine the multilevel Monte Carlo method with a slightly modified Euler method.
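The recommended combination of multilevel Monte Carlo with a modified Euler method can be sketched as follows. This is our own minimal illustration, assuming a drift-tamed Euler step and the example SDE dX_t = -X_t^3 dt + dW_t; the coupling drives the fine and coarse discretizations with the same Brownian increments:

```python
import math
import random

def tamed_drift(x, h):
    # tamed increment for the example cubic drift f(x) = -x^3:
    # h*f(x) / (1 + h*|f(x)|) is bounded by 1 in absolute value
    fx = -x**3
    return h * fx / (1.0 + h * abs(fx))

def coupled_level(level, T, rng):
    """One coupled sample Y_fine(T) - Y_coarse(T) for MLMC level >= 1,
    driving both tamed Euler discretizations with the same Brownian path."""
    nf = 2**level                      # fine grid has nf steps
    hf, hc = T / nf, 2.0 * T / nf      # fine and coarse step sizes
    yf = yc = 1.0
    for _ in range(nf // 2):
        dw1 = rng.gauss(0.0, math.sqrt(hf))
        dw2 = rng.gauss(0.0, math.sqrt(hf))
        yf = yf + tamed_drift(yf, hf) + dw1
        yf = yf + tamed_drift(yf, hf) + dw2
        yc = yc + tamed_drift(yc, hc) + (dw1 + dw2)  # coarse step reuses noise
    return yf - yc

# Telescoping MLMC estimator of E[X_T] for dX = -X^3 dt + dW, X_0 = 1
rng = random.Random(0)
L, T, n0 = 5, 1.0, 1000
est = sum(1.0 + tamed_drift(1.0, T) + rng.gauss(0.0, math.sqrt(T))
          for _ in range(n0)) / n0            # level 0: a single tamed step
for level in range(1, L + 1):
    m = max(n0 // 2**level, 1)                # fewer samples on finer levels
    est += sum(coupled_level(level, T, rng) for _ in range(m)) / m
```

With the plain Euler step in place of the tamed one, the article shows that this telescoping estimator diverges with probability one for such superlinear drifts; the taming restores the moment bounds the multilevel analysis needs.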