The formulation of a black-oil or compositional fully coupled surface and subsurface simulator is described. It is based on replacing the well model in a conventional reservoir simulator with a generalized network model of the wells and facilities. This allows for representation of complex wellbore geometry and downhole equipment. The method avoids the inefficiencies and/or inaccuracies of other coupled models, in which wells and facilities are treated as separate domains or in which the global system is not solved simultaneously. Example cases demonstrate the performance of the model for simple and segmented wellbores (with and without facilities).
Summary Applications are presented for a new numerical method - operator splitting on multiple grids (OSMG) - devised for simulations in heterogeneous porous media. A coarse-grid, finite-element pressure solver is interfaced with a fine-grid timestepping scheme. The CPU time for the pressure solver is greatly reduced and concentration fronts have minimal numerical dispersion. Introduction Multiphase transport in porous media is governed by a variety of complicated phenomena. Different processes are important on a range of length scales. Capillary effects are important at small scales (pore phenomena), and macroscopic effects such as front coalescence and dispersion are important at field scale. With the introduction of computer tomography and nuclear magnetic resonance techniques to analyze core samples, one can gain insight into the geological structure of the sample on the laboratory scale; i.e., detailed 3D porosity maps for a core sample can be obtained easily. One can naturally incorporate this information in the form of porosity and permeability data to model a displacement experiment through the core. The question of how to scale up from the laboratory scale to the field scale is still open. However, a numerical scheme that can incorporate a description of the local heterogeneities that is as fine as possible given state-of-the-art computing represents a valuable tool in the development of scale-up schemes. The aims of this work are to develop and implement a numerical procedure that can capture the influence of local heterogeneities on a length scale of 1 to 2 ft for a computational domain on the order of 1,000 ft in two dimensions.
Two main concerns are involved in this development: the need to reduce the numerical dispersion introduced by a conventional coarse-grid finite-difference discretization and the reduction of the computational effort needed for the pressure solution. Reducing numerical dispersion allows sharp definition of the concentration or saturation fronts and permits investigation of both stable and unstable displacements, including fingering that results from viscous instabilities and channeling that results from local heterogeneities. The method uses the implicit-pressure, explicit-saturation (IMPES) procedure to decouple the pressure equation from the conservation equations numerically. A fourth-order finite-element method is used to solve the elliptic pressure problem on a coarse grid. This solution is projected to a fine grid of about 10,000 nodes in each coarse-grid element by a splines-under-tension technique, and timestepping is performed on the fine grid. After several timesteps, the current mobilities on the fine grid are passed to the coarse grid to update the pressure field.
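The outer loop of this coarse-pressure/fine-saturation cycle can be sketched in a few lines. The following is a hypothetical 1D illustration, not the authors' code: it substitutes a second-order finite-difference pressure solve for the paper's fourth-order finite-element solver, and plain linear interpolation for the splines-under-tension projection; all function names, viscosities, and grid sizes are invented for the example.

```python
import numpy as np

MU_W, MU_O = 1.0, 5.0  # illustrative water/oil viscosities

def total_mobility(s):
    # Quadratic (Corey-type) relative permeabilities, chosen for the example.
    return s**2 / MU_W + (1.0 - s)**2 / MU_O

def solve_pressure_coarse(lam, dx, p_left=1.0, p_right=0.0):
    # Solve -(lam p')' = 0 on the coarse grid with Dirichlet end pressures.
    n = lam.size
    t = 2.0 * lam[:-1] * lam[1:] / (lam[:-1] + lam[1:]) / dx**2  # face transmissibilities
    A = np.zeros((n, n))
    b = np.zeros(n)
    A[0, 0] = A[-1, -1] = 1.0
    b[0], b[-1] = p_left, p_right
    for i in range(1, n - 1):
        A[i, i - 1], A[i, i + 1] = -t[i - 1], -t[i]
        A[i, i] = t[i - 1] + t[i]
    return np.linalg.solve(A, b)

def osmg(n_coarse=11, refine=10, n_outer=5, n_sub=20, dt=2e-3):
    x_c = np.linspace(0.0, 1.0, n_coarse)
    x_f = np.linspace(0.0, 1.0, (n_coarse - 1) * refine + 1)
    dx_c, dx_f = x_c[1] - x_c[0], x_f[1] - x_f[0]
    s = np.zeros_like(x_f)  # water saturation on the fine grid
    s[0] = 1.0              # injection end
    for _ in range(n_outer):
        # 1. Restrict the current fine-grid mobilities to the coarse grid.
        lam_c = np.interp(x_c, x_f, total_mobility(s))
        # 2. Implicit pressure solve on the coarse grid (IMPES pressure step).
        p_c = solve_pressure_coarse(lam_c, dx_c)
        # 3. Project the pressure field to the fine grid.
        p_f = np.interp(x_f, x_c, p_c)
        # 4. Several explicit saturation substeps on the fine grid.
        for _ in range(n_sub):
            v = -total_mobility(s[:-1]) * np.diff(p_f) / dx_f  # upwind face velocity
            fw = (s[:-1]**2 / MU_W) / (s[:-1]**2 / MU_W
                                       + (1.0 - s[:-1])**2 / MU_O + 1e-12)
            s[1:-1] -= dt / dx_f * np.diff(v * fw)  # upwind water flux balance
            s[0], s[-1] = 1.0, s[-2]                # inlet / outflow conditions
            s = np.clip(s, 0.0, 1.0)
    return s

saturation = osmg()
```

The point of the structure is that the (expensive) linear solve sees only `n_coarse` unknowns, while the cheap explicit transport runs on the refined grid where fronts stay sharp.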
This paper presents the application of distributed-memory parallel computers to field-scale reservoir simulations using a parallel version of UTCHEM, The University of Texas Chemical Flooding Simulator. The model is a general-purpose, highly vectorized chemical compositional simulator that can simulate a wide range of displacement processes at both field and laboratory scales. The original simulator was modified to run on both distributed-memory parallel machines (Intel iPSC/860 and Delta, Connection Machine 5, Kendall Square 1 and 2, and CRAY T3D) and a cluster of workstations. A domain decomposition approach has been taken toward parallelization of the code. A portion of the discrete reservoir model is assigned to each processor by a set-up routine that attempts a data layout as even as possible from the load-balance standpoint. Each of these subdomains is extended so that data can be shared between adjacent processors for stencil computation. The added routines that make parallel execution possible are written in a modular fashion that makes porting to new parallel platforms straightforward. Results of the distributed-memory computing performance of the parallel simulator are presented for field-scale applications such as a tracer flood and a polymer flood. A comparison of the wall-clock times for the same problems on a vector supercomputer is also presented. Introduction Problems of current importance in oil-recovery processes involve transport of many species in heterogeneous media. Accurate numerical modeling of these processes requires large-scale simulation. Depending on the size of the reservoir and the data, there is a need to use up to millions of gridblocks in the simulation of oil reservoirs to adequately represent the complex geological and geophysical data now available from 3D seismic and other state-of-the-art sources.
The biggest limitation of most compositional reservoir simulators is the large computational time, as well as the computing memory, required to simulate very large scale reservoir problems with fine gridblocks. High-performance vector computers of the Cray type can sometimes handle these problems, though at a prohibitive cost. Over the last ten years, the rapid development of distributed-memory parallel computers has offered the required high performance at a moderate cost for large-scale reservoir simulation. As a consequence, there has been increasing interest in parallel computing with compositional reservoir simulators during the last few years because of its potential to model large and complex problems faster and more economically. An approach of data decomposition by subdomains was taken in porting the serial version of the UTCHEM simulator to a collection of distributed-memory parallel processors. The targeted systems are the Intel iPSC/860 (hypercube) and Touchstone Delta, the Thinking Machines Connection Machine 5, a heterogeneous cluster of workstations using PVM, the Kendall Square 1 and 2 series, and the CRAY T3D. This paper presents results on the Intel and Thinking Machines systems only. The remaining systems will be addressed at a later time, although the code has been tested and validated against the serial simulator on all systems mentioned above. A description of each of the parallel computers used in this work is given. We also compare the performance of the parallel simulator on different machines using field-scale polymer and tracer flooding examples. One of the major bottlenecks in parallelizing reservoir simulators is the treatment of the linearized finite-difference equations, which has been addressed by many authors.
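The subdomain extension and data sharing described above amount to a ghost-cell (halo) exchange. The serial sketch below emulates that pattern with a simple 3-point diffusion stencil; it is not UTCHEM code, all names are illustrative, and on a real distributed-memory machine each ghost-cell assignment in `exchange_halos` would be a message-passing send/receive pair.

```python
import numpy as np

def decompose(field, n_sub):
    # Give each "processor" a contiguous slice of the grid, padded with one
    # ghost cell on each side for the 3-point stencil.
    return [np.pad(chunk, 1) for chunk in np.array_split(field, n_sub)]

def exchange_halos(subs):
    # Fill ghost cells from neighbouring subdomains.
    for i in range(len(subs) - 1):
        subs[i][-1] = subs[i + 1][1]   # right ghost <- neighbour's first cell
        subs[i + 1][0] = subs[i][-2]   # neighbour's left ghost <- our last cell
    return subs

def diffusion_step(subs, alpha=0.25):
    # Explicit 3-point stencil, computed independently on each subdomain.
    for k, c in enumerate(subs):
        interior = c[1:-1] + alpha * (c[2:] - 2.0 * c[1:-1] + c[:-2])
        subs[k] = np.pad(interior, 1)  # ghosts are refilled at the next exchange
    return subs

def gather(subs):
    return np.concatenate([c[1:-1] for c in subs])

# Check the decomposed run against a single-domain reference.
field = np.zeros(40)
field[20] = 1.0

ref = field.copy()
for _ in range(10):
    p = np.pad(ref, 1)
    ref = p[1:-1] + 0.25 * (p[2:] - 2.0 * p[1:-1] + p[:-2])

subs = exchange_halos(decompose(field, 4))
for _ in range(10):
    subs = exchange_halos(diffusion_step(subs))
result = gather(subs)
```

Because each subdomain sees up-to-date neighbour values through its ghost cells before every step, the decomposed result is bitwise identical to the single-domain run, which is exactly the property used to validate the parallel simulator against the serial one.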
Domain decomposition methods are a major area of contemporary research in the numerical analysis of partial differential equations. They provide robust, parallel, and scalable preconditioned iterative methods for the large linear systems arising when continuous problems are discretized by finite elements, finite differences, or spectral methods. This paper presents numerical experiments on a distributed-memory parallel computer, the 512-processor Touchstone Delta at the California Institute of Technology. An overlapping additive Schwarz method is implemented for the mixed finite-element discretization of second-order elliptic problems in three dimensions arising from flow models in reservoir simulation. These problems are characterized by large variations in the coefficients of the elliptic operator, often associated with short correlation lengths, which make the problems very ill-conditioned. The results confirm the theoretical bound on the condition number of the iteration operator and show the advantage of domain decomposition preconditioning as opposed to the simpler but less robust diagonal preconditioner.
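A one-level overlapping additive Schwarz preconditioner applies an exact solve on each overlapping subdomain and sums the corrections. The sketch below is a minimal 1D analogue with a high-contrast coefficient, not the paper's 3D mixed finite-element setting; subdomain counts, overlap width, and the coefficient distribution are invented for illustration. It compares preconditioned conjugate gradients with the Schwarz preconditioner against the simpler diagonal (Jacobi) preconditioner.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# Face coefficients with four orders of magnitude of contrast, mimicking
# heterogeneous permeability fields.
k = 10.0 ** rng.uniform(-2, 2, n + 1)

# Tridiagonal SPD matrix for -(k u')' with Dirichlet boundary conditions.
A = np.zeros((n, n))
for i in range(n):
    A[i, i] = k[i] + k[i + 1]
    if i > 0:
        A[i, i - 1] = -k[i]
    if i < n - 1:
        A[i, i + 1] = -k[i + 1]
b = np.ones(n)

def schwarz_blocks(n, n_sub=8, overlap=4):
    # Overlapping index sets covering all unknowns.
    edges = np.linspace(0, n, n_sub + 1).astype(int)
    return [np.arange(max(edges[i] - overlap, 0), min(edges[i + 1] + overlap, n))
            for i in range(n_sub)]

blocks = schwarz_blocks(n)
inv_blocks = [np.linalg.inv(A[np.ix_(idx, idx)]) for idx in blocks]

def M_schwarz(r):
    # One-level overlapping additive Schwarz: sum of local subdomain solves.
    z = np.zeros_like(r)
    for idx, Ainv in zip(blocks, inv_blocks):
        z[idx] += Ainv @ r[idx]
    return z

def M_diag(r):
    # Diagonal (Jacobi) preconditioner for comparison.
    return r / np.diag(A)

def pcg(A, b, M, tol=1e-8, maxit=500):
    # Standard preconditioned conjugate gradients; returns (x, iterations).
    x = np.zeros_like(b)
    r = b - A @ x
    z = M(r)
    p = z.copy()
    rz = r @ z
    for it in range(1, maxit + 1):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, it
        z = M(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxit

x_as, it_as = pcg(A, b, M_schwarz)
x_j, it_j = pcg(A, b, M_diag)
```

Even in this toy setting the Schwarz-preconditioned iteration count is far smaller than the diagonally preconditioned one, illustrating the robustness advantage the paper reports; the full method additionally needs a coarse-level component for scalability in the number of subdomains.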