We present efficient deep learning techniques for approximating flow and transport equations for both single-phase and two-phase flow problems. The proposed methods take advantage of the sparsity structures in the underlying discrete systems and can serve as efficient alternatives to full-order system solvers. In particular, for the flow problem, we design a network with convolutional and locally connected layers to perform model reduction. Moreover, we employ a custom loss function to impose local mass conservation constraints, which helps preserve the physical properties of the velocity solution we are interested in learning. For the saturation problem, we propose a residual-type network to approximate the dynamics. Our main contribution here is the design of custom sparsely connected layers that account for the inherent sparse interaction between the input and output. After training, the approximated feed-forward map can be applied iteratively to predict solutions over long time horizons. Our trained networks, especially in the two-phase flow setting where the maps are nonlinear, show great potential in accurately approximating the underlying physical systems and improving computational efficiency. Numerical experiments are performed and discussed to demonstrate the performance of the proposed techniques.
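To make the mass conservation constraint concrete, the sketch below shows one plausible way such a custom loss could be assembled: a standard data misfit on the velocity field plus a penalty that drives the discrete divergence of the predicted face fluxes toward the cell-wise source term. The staggered-grid layout, function names, and the penalty weight are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def divergence(u, v, hx=1.0, hy=1.0):
    # Discrete divergence on a uniform staggered grid:
    # u holds x-face fluxes with shape (nx+1, ny),
    # v holds y-face fluxes with shape (nx, ny+1).
    # Returns the net outflow per cell, shape (nx, ny).
    return (u[1:, :] - u[:-1, :]) / hx + (v[:, 1:] - v[:, :-1]) / hy

def mass_conservation_loss(u_pred, v_pred, u_true, v_true, q, weight=1.0):
    # Hypothetical composite loss: velocity misfit plus a local
    # mass-conservation penalty (divergence of predicted flux vs. source q).
    misfit = np.mean((u_pred - u_true) ** 2) + np.mean((v_pred - v_true) ** 2)
    penalty = np.mean((divergence(u_pred, v_pred) - q) ** 2)
    return misfit + weight * penalty
```

For a source-free cell, the penalty term vanishes exactly when the predicted fluxes into and out of the cell balance, which is the local conservation property the custom loss is meant to encourage.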
Introduction

Various physical phenomena in engineering applications are described by flow and transport problems in porous media, including reservoir engineering, climate dynamics, material science, and so on. The underlying problems naturally exhibit heterogeneities spanning from large physical scales down to micro-scales. Numerical simulations of these problems are challenging due to the wide range of length scales and potential uncertainties. In the past decades, numerous model reduction techniques and multiscale methods have been proposed to design alternative models with greater computational efficiency as well as the desired accuracy. Local model reduction techniques typically involve building representations of the underlying heterogeneity using basis functions or effective coarse-grid properties, and then constructing a coarse-level model. For large-scale dynamical systems, global reduced-order models adopting the Proper Orthogonal Decomposition method, Krylov subspace projection methods, etc., have been proposed to approximate the state-space problems. Though both local and global model reduction techniques have been extensively applied to many problems, reduced-order models may take complicated forms even in the linear case, let alone in nonlinear settings [25,4,8,46].

Recently, deep learning has attracted growing attention in a rich class of applications. It has achieved revolutionary results in image, speech, and text recognition [35,31,29]. The potential of deep neural networks lies in their great capacity to approximate high-dimensional nonlinear maps. Extensive efforts have been devoted to studying the expressivity of deep neural networks theoretically, just to mention a few [21,32,20,42,38], ...