“…It is one of the most challenging issues in applied mathematics to approximately solve high-dimensional partial differential equations (PDEs), and most of the numerical approximation methods for PDEs in the scientific literature suffer from the so-called curse of dimensionality in the sense that the number of computational operations employed in the corresponding approximation scheme to obtain an approximation precision ε > 0 grows exponentially in the PDE dimension and/or the reciprocal of ε (cf., e.g., [42, Chapter 1] and [43, Chapter 9] for related concepts and cf., e.g., [4,5,7,19,29,32,33] for numerical approximation methods for nonlinear PDEs which do not suffer from the curse of dimensionality). Recently, certain deep learning based approximation methods for PDEs have been proposed, and various numerical simulations for such methods suggest (cf., e.g., [1,2,3,8,9,10,12,13,14,15,17,18,21,26,27,28,30,34,39,40,41,44,45,46,48]) that deep neural network (DNN) approximations might have the capacity to indeed overcome the curse of dimensionality in the sense that the number of real parameters used to describe the approximating DNNs grows at most polynomially in both the PDE dimension d ∈ N = {1, 2, .…”