Developing algorithms for solving high-dimensional partial differential equations (PDEs) has long been an exceedingly difficult task, owing to the notorious "curse of dimensionality." This paper introduces a deep learning-based approach that can handle general high-dimensional parabolic PDEs. To this end, the PDEs are reformulated using backward stochastic differential equations and the gradient of the unknown solution is approximated by neural networks, very much in the spirit of deep reinforcement learning with the gradient acting as the policy function. Numerical results on examples including the nonlinear Black-Scholes equation, the Hamilton-Jacobi-Bellman equation, and the Allen-Cahn equation suggest that the proposed algorithm is quite effective in high dimensions, in terms of both accuracy and cost. This opens up possibilities in economics, finance, operational research, and physics, by considering all participating agents, assets, resources, or particles together at the same time, instead of making ad hoc assumptions on their interrelationships.
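The BSDE reformulation described above can be sketched in a few lines of numpy. In the actual method, the gradient process Z is a neural network trained by stochastic gradient descent; the minimal sketch below instead plugs in the analytically known gradient for the one-dimensional heat equation u_t + ½u_xx = 0 with terminal condition g(x) = x², whose solution is u(t,x) = x² + (T − t) with gradient u_x = 2x, so that the forward discretization runs without a training loop. All parameter values are illustrative.

```python
import numpy as np

# BSDE reformulation of the heat equation u_t + 0.5*u_xx = 0, u(T,x) = g(x):
#   dX_t = dW_t,       X_0 = x0
#   dY_t = Z_t dW_t,   Y_0 = u(0, x0),  Z_t = u_x(t, X_t)
# In the deep BSDE method, Z is a neural network (the "policy"); here we
# substitute the known gradient u_x(t, x) = 2x for g(x) = x^2, so the
# sketch needs no optimization.
rng = np.random.default_rng(0)
T, N, M, x0 = 1.0, 100, 10_000, 0.5
dt = T / N

X = np.full(M, x0)
Y = np.full(M, x0**2 + T)        # exact initial value u(0, x0) = x0^2 + T
for _ in range(N):
    dW = rng.normal(0.0, np.sqrt(dt), size=M)
    Y += 2.0 * X * dW            # Z_n dW_n with Z_n = u_x(t_n, X_n) = 2 X_n
    X += dW                      # forward Euler step of dX = dW

mse = np.mean((Y - X**2) ** 2)   # terminal mismatch E[(Y_N - g(X_N))^2]
print(f"terminal mismatch MSE: {mse:.4f}")
```

The training in the paper turns this around: the initial value Y_0 and the maps Z_n are unknown parameters, and they are optimized so that exactly this terminal mismatch is minimized.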
On the one hand, the explicit Euler scheme fails to converge strongly to the exact solution of a stochastic differential equation (SDE) with a superlinearly growing and globally one-sided Lipschitz continuous drift coefficient. On the other hand, the implicit Euler scheme is known to converge strongly to the exact solution of such an SDE. Implementations of the implicit Euler scheme, however, require additional computational effort. In this article we therefore propose an explicit and easily implementable numerical method for such an SDE and show that this method converges strongly with the standard order one-half to the exact solution of the SDE. Simulations reveal that this explicit strongly convergent numerical scheme is considerably faster than the implicit Euler scheme.
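An explicit, easily implementable scheme of the kind the abstract describes can be sketched as a "tamed" Euler step, in which the drift increment is rescaled so that it stays bounded per step. The sketch below applies it to the SDE dX = −X³ dt + dW, whose drift is superlinearly growing but globally one-sided Lipschitz; the drift function and parameter values are illustrative, not taken from the article.

```python
import numpy as np

def mu(x):
    """Superlinearly growing, one-sided Lipschitz drift."""
    return -x**3

def tamed_euler(x0, T, n_steps, n_paths, rng):
    """Explicit 'tamed' Euler scheme: the drift increment
    mu(X) * dt / (1 + dt * |mu(X)|) is bounded by 1 in magnitude,
    which prevents the blow-up of the plain explicit scheme while
    keeping each step as cheap as an ordinary Euler step."""
    dt = T / n_steps
    x = np.full(n_paths, float(x0))
    for _ in range(n_steps):
        drift = mu(x) * dt / (1.0 + dt * np.abs(mu(x)))
        x = x + drift + rng.normal(0.0, np.sqrt(dt), size=n_paths)
    return x

rng = np.random.default_rng(1)
x_T = tamed_euler(x0=2.0, T=1.0, n_steps=200, n_paths=5_000, rng=rng)
print("second moment at T:", np.mean(x_T**2))
```

Unlike the implicit Euler scheme, no nonlinear equation has to be solved per step, which is the source of the speed advantage reported in the abstract.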
The stochastic Euler scheme is known to converge to the exact solution of a stochastic differential equation (SDE) with globally Lipschitz continuous drift and diffusion coefficients. Recent results extend this convergence to coefficients that grow at most linearly. For superlinearly growing coefficients, however, finite-time convergence in the strong mean-square sense has remained an open question. In this article we answer this question in the negative and prove, for a large class of SDEs with non-globally Lipschitz continuous coefficients, that Euler's approximation converges neither in the strong mean-square sense nor in the numerically weak sense to the exact solution at a finite time point. Even worse, the difference between the exact solution and the numerical approximation at a finite time point diverges to infinity both in the strong mean-square sense and in the numerically weak sense.
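The divergence proof sketched in the abstract rests on rare events of the Brownian motion; the mechanism behind it can be seen already in a deterministic toy recursion with the noise switched off. For the cubic drift −x³, once an iterate exceeds √(2/dt) in magnitude, the explicit Euler increment overshoots past the origin and the iterates explode super-exponentially with alternating sign. The parameter values below are illustrative.

```python
import numpy as np

# Explicit Euler for dX = -X^3 dt (noise switched off) with step size dt.
# If |x| > sqrt(2/dt), then |x - x^3 * dt| > |x|: the update overshoots
# past the origin and the iterates blow up with alternating sign.
# In the SDE setting, rare large Brownian increments push the scheme into
# this regime, which is why its moments diverge.
dt = 0.01
x = np.float64(20.0)          # |x| > sqrt(2/dt) ~ 14.1, so blow-up starts
trajectory = [float(x)]
with np.errstate(over="ignore"):
    for _ in range(6):
        x = x - x**3 * dt
        trajectory.append(float(x))

for n, v in enumerate(trajectory):
    print(f"step {n}: {v:.3e}")
```

After six steps the iterate already exceeds 10^100 in magnitude, illustrating why no moment of the approximation can stay bounded once such excursions occur with positive probability.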
High-dimensional partial differential equations (PDEs) appear in a number of models from the financial industry, such as derivative pricing models, credit valuation adjustment (CVA) models, and portfolio optimization models. The PDEs in such applications are high-dimensional because the dimension corresponds to the number of financial assets in a portfolio. Moreover, such PDEs are often fully nonlinear due to the need to incorporate nonlinear phenomena such as default risks, transaction costs, volatility uncertainty (Knightian uncertainty), or trading constraints into the model. Such high-dimensional fully nonlinear PDEs are exceedingly difficult to solve, as the computational effort of standard approximation methods grows exponentially with the dimension. In this work we propose a new method for solving high-dimensional fully nonlinear second-order PDEs. Our method can in particular be used to sample from high-dimensional nonlinear expectations. The method is based on (i) a connection between fully nonlinear second-order PDEs and second-order backward stochastic differential equations (2BSDEs), (ii) a merged formulation of the PDE and the 2BSDE problem, (iii) a temporal forward discretization of the 2BSDE and a spatial approximation via deep neural nets, and (iv) a stochastic gradient descent-type optimization procedure. Numerical results obtained using TensorFlow in Python illustrate the efficiency and the accuracy of the method in the cases of a 100-dimensional Black-Scholes-Barenblatt equation, a 100-dimensional Hamilton-Jacobi-Bellman equation, and a nonlinear expectation of a 100-dimensional G-Brownian motion. (arXiv:1709.05963v1 [math.NA], 18 Sep 2017)
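The optimization step (iv) can be sketched in miniature on a toy linear problem: for the heat-equation BSDE with terminal condition g(x) = x, parametrize the unknown initial value Y_0 and a single constant "gradient" z, and run stochastic gradient descent on the simulated terminal mismatch. This is only an illustration of the training principle under simplified assumptions; the actual method uses deep neural networks in TensorFlow and the full 2BSDE discretization.

```python
import numpy as np

# Step (iv) in miniature: learn the unknown initial value y0 and a constant
# gradient z by stochastic gradient descent on the simulated terminal
# mismatch E[(y0 + z*W_T - g(x0 + W_T))^2]. Toy setup: dX = dW and
# g(x) = x, so the exact answers are y0 = x0 and z = 1. (In the full
# method, z is replaced by a deep network evaluated at every time step.)
rng = np.random.default_rng(2)
T, x0 = 1.0, 0.3
y0, z = 0.0, 0.0                  # trainable parameters, arbitrary init
lr, batch = 0.1, 256

for _ in range(1000):
    w = rng.normal(0.0, np.sqrt(T), size=batch)   # samples of W_T
    resid = y0 + z * w - (x0 + w)                 # Y_T - g(X_T) per path
    y0 -= lr * np.mean(2.0 * resid)               # d/dy0 of mean(resid^2)
    z -= lr * np.mean(2.0 * resid * w)            # d/dz  of mean(resid^2)

print(f"learned y0 = {y0:.3f} (exact {x0}), z = {z:.3f} (exact 1)")
```

Because the toy loss is exactly minimizable, the parameters converge to the true initial value and gradient; in the PDE setting, the learned y0 is precisely the sought solution value u(0, x0).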
Many stochastic differential equations (SDEs) in the literature have a superlinearly growing nonlinearity in their drift or diffusion coefficient. Unfortunately, moments of the computationally efficient Euler-Maruyama approximation method diverge for these SDEs in finite time. This article develops a general theory based on rare events for studying integrability properties, such as moment bounds, of discrete-time stochastic processes. Using this approach, we establish moment bounds for fully and partially drift-implicit Euler methods and for a class of new explicit approximation methods that require only a few more arithmetical operations than the Euler-Maruyama method. These moment bounds are then used to prove strong convergence of the proposed schemes. Finally, we illustrate our results for several SDEs from finance, physics, biology, and chemistry.
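The fully drift-implicit Euler method mentioned above requires solving a nonlinear equation at every time step, which is the extra cost that motivates the cheap explicit alternatives. For the cubic-drift SDE dX = −X³ dt + dW, each implicit step amounts to a scalar cubic solve, sketched below with Newton's method; the drift and all parameters are illustrative, not taken from the article.

```python
import numpy as np

def implicit_euler_step(x, dw, dt, newton_iters=20):
    """One fully drift-implicit Euler step for dX = -X^3 dt + dW:
    solve y + dt*y^3 = x + dw for y. The map y -> y + dt*y^3 is strictly
    increasing, so the real root is unique; Newton iteration from y = x."""
    b = x + dw
    y = x
    for _ in range(newton_iters):
        f = y + dt * y**3 - b
        fprime = 1.0 + 3.0 * dt * y**2
        y = y - f / fprime
    return y

rng = np.random.default_rng(3)
T, n_steps, n_paths = 1.0, 200, 2_000
dt = T / n_steps
x = np.full(n_paths, 2.0)
for _ in range(n_steps):
    x = implicit_euler_step(x, rng.normal(0.0, np.sqrt(dt), size=n_paths), dt)

print("second moment at T:", np.mean(x**2))
```

The per-step root solve keeps the scheme's moments bounded, but it is exactly the work that the explicit methods studied in the article avoid.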