We describe a multistage parallel linear solver framework developed as part of the Intersect (IX) next-generation reservoir simulation project. The object-oriented framework allows wide flexibility in the number of stages, methods, and preconditioners. Here, we describe the specific components of a two-stage CPR[1] (Constrained Pressure Residual) scheme designed for large-scale parallel, structured and unstructured linear systems. We developed a highly efficient in-house Parallel Algebraic Multigrid (PAMG) solver as the first-stage preconditioner. For the second stage, we use a parallel ILU-type scheme. This new and powerful combination of CPR and PAMG was the result of detailed analysis of the linear system of equations associated with reservoir simulation. Using several difficult reservoir simulation problems, we demonstrate the robustness and excellent parallel scalability of the IX linear solver. For the field case studies, the IX linear solver with CPR and PAMG is at least five times faster than an established and widely used industrial linear solver. The performance advantage of the IX linear solver over traditional reservoir simulation linear solvers increases with both problem size and the number of processors.

Introduction

Different types of grid may be used for reservoir flow simulation to model geometrically complex, highly detailed models and/or deviated or multi-lateral wells[2]. Grid types are often labeled based on their structure. Examples of simulation grids include: structured Cartesian, structured stratigraphic, multi-block stratigraphic, PEBI (Perpendicular Bisector), and generally unstructured. Hybrid grids that combine various types can also be used.
It is now widely recognized that complete flexibility in representing complex and highly detailed simulation models can be achieved using generally unstructured grids[3]. In recent years, significant efforts have focused on building multi-purpose reservoir flow simulators that can deal with geometrically complex and highly detailed structured and unstructured reservoir models[4,5,6]. These relatively large-scale efforts are being pursued because, for nearly three decades, the reservoir simulation community has focused on building robust and efficient reservoir simulators for structured-grid problems. Today, the ability to routinely simulate a wide spectrum of practical black-oil problems on (effectively) structured models with O(10^5) gridblocks is widespread. However, the performance of traditional reservoir simulators typically deteriorates significantly with problem size and the number of processors. This is because the algorithms and software implementations were not designed for scalable, parallel computation. A scalable algorithm is one whose computational complexity (i.e., the number of operations required to reach the solution) is proportional to the number of unknowns; moreover, the algorithm should also have a convergence rate that is independent of problem size and the number of processors. In numerical solution algorithms, there is often a tradeoff between convergence rate and degree of parallelism. As a result, to obtain a useful measure of parallel efficiency, the best scalar (uniprocessor) algorithm should be used as the reference. Scalable methods are needed because the size of problems of interest continues to grow significantly, and we want to avoid methods with computational complexity O(N^a), where the exponent a is (much) larger than unity.
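The two-stage CPR idea can be sketched in a few lines: the residual is first restricted to the pressure block and solved (approximately) with a multigrid-style method, and the remaining full-system residual is then corrected with an ILU-type stage. The sketch below is illustrative only, not the IX implementation; `amg_solve` and `ilu_solve` are hypothetical caller-supplied stand-ins for the first- and second-stage solvers.

```python
import numpy as np

def cpr_two_stage_apply(A, r, p_idx, ilu_solve, amg_solve):
    """One application x = M^{-1} r of a two-stage CPR-style preconditioner.

    p_idx     : indices of the pressure unknowns within the full system
    amg_solve : approximate solver for the restricted pressure matrix
    ilu_solve : approximate solver for the full system (ILU-type stage)
    Both solver arguments are hypothetical stand-ins, not the IX solvers.
    """
    # Stage 1: restrict to the pressure block and solve it (AMG in IX)
    x = np.zeros_like(r)
    Ap = A[np.ix_(p_idx, p_idx)]
    x[p_idx] = amg_solve(Ap, r[p_idx])
    # Stage 2: ILU-type correction on the updated full residual
    r2 = r - A @ x
    x = x + ilu_solve(r2)
    return x
```

With exact solves substituted for both stages, one application reproduces the exact solution; in practice both stages are approximate and the operator is applied as a preconditioner inside a Krylov method such as GMRES.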
This paper describes the algorithms and implementation of a parallel reservoir simulator, designed for but not limited to distributed-memory computational platforms, that can solve previously prohibitive problems efficiently. The parallel simulator inherits the multipurpose features of the in-house sequential simulator, which is at the core of the new capability. As a result, black-oil, miscible, compositional, and thermal problems can be solved efficiently using this new simulator. A multilevel domain decomposition approach is used. First, the original reservoir is decomposed into several domains, each of which is assigned to a separate processing node. All nodes then execute computations in parallel, each node on its associated subdomain. The parallel computations include initialization, coefficient generation, linear solution on the subdomain, and input/output. To enhance the convergence rate, we solve a coarse global problem, which is generated via a multigrid-like coarsening procedure. This solution serves as a preconditioner for an outer parallel GMRES loop. The exchange of information across the subdomains, or processors, is achieved using the message-passing interface standard, MPI. The use of MPI ensures portability across different computing platforms, ranging from massively parallel machines to clusters of workstations. Results indicate that the simulator exhibits excellent scalability for up to 32 processors on the IBM SP2 system. Scalability results are also presented for a cluster of IBM workstations connected via an ATM (Asynchronous Transfer Mode) network. The use of ATM for interprocessor communication was found to have a small, but measurable, impact on scaling performance.

Introduction

The predictive capacity of a reservoir simulator depends first on the quality of the information used, and then on the ability of the computational grid and solution method to describe the flow behavior accurately.
The injection of more detail into reservoir description is producing very large models. Scale-up technology can be applied to reduce the overall size of the models while preserving the important details of the flow. For large-scale reservoir displacements, the scaled-up model itself could consist of millions of gridblocks. Flow simulation using models of that size is beyond the current capability of uniprocessor, or even shared-memory multiprocessor, compute platforms. In this work, we describe the development of a parallel multi-purpose reservoir simulator that can solve previously prohibitive problems efficiently. In addition, the parallel simulator provides the means to validate and help improve the scaled-up model by comparing its flow predictions with detailed simulations using the original finer-scale description from which it is derived. In the following sections, we give a brief overview of the parallel computing landscape and adopt a definition of scalability. That is followed by a description of our parallel simulation development strategy and implementation details. Performance results for the parallel simulator are then presented and analyzed. We close with key conclusions.

Background

The development of application codes for distributed-memory parallel platforms has been, until recently, a high-risk investment, both in terms of capital and manpower. This high-risk environment was due, in large part, to (1) an unstable landscape of parallel computing vendors and machines and (2) a lack of software portability across the various platforms. The focus had been on massively parallel machines with proprietary architectures that link hundreds, or even thousands, of specially designed processors. Because the processing nodes in these machines tended to have limited computing power and small local memory, massive parallelism, both in terms of total memory and compute power, was achieved by employing thousands of such processors.
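The multilevel scheme described in the abstract above (independent subdomain solves plus a coarse global correction, used to precondition an outer GMRES loop) can be sketched as a generic additive two-level preconditioner. This is not the paper's actual coarsening: the names `parts` and `R` are assumptions, and dense solves stand in for the per-node local solvers.

```python
import numpy as np

def two_level_precond(A, r, parts, R):
    """One application of an additive two-level preconditioner (sketch).

    parts : list of index arrays, one per subdomain ('processor')
    R     : coarse restriction matrix (n_coarse x n), standing in for the
            multigrid-like coarsening described in the abstract
    """
    x = np.zeros_like(r)
    # Local solves on each subdomain (executed in parallel, one per node)
    for idx in parts:
        x[idx] += np.linalg.solve(A[np.ix_(idx, idx)], r[idx])
    # Coarse global correction couples the subdomains
    Ac = R @ A @ R.T                      # Galerkin coarse operator
    x += R.T @ np.linalg.solve(Ac, R @ r)
    return x
```

The coarse correction is what keeps the convergence rate from degrading as the number of subdomains grows; without it, information travels only one subdomain per iteration.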
A general purpose reservoir simulator was developed as a vehicle to investigate vectorization and parallel processing on the Cray X-MP/48. The simulator can model black oil, compositional, and steam injection processes. It includes both fully implicit and IMPES formulations. Microtasking is used for parallel processing in order to maintain the portability of the code. The simulator calculations other than the linear equation solution were structured to facilitate vectorization and parallel processing. Several issues concerning parallel processing are discussed. These include task granularity, load balance, synchronization, memory contention, and the balance between vectorization and parallel processing. Based on three test problems, the speedup of the calculations due to vectorization ranges from 5.3 to 10. The speedup can be augmented by as much as 3.3 times through parallel processing. The scheme proposed here is general enough to be applicable to other vector and parallel processing computers.

Introduction

Two factors which are important in simulator utilization are speed and maintainability. Vectorization and parallel processing can significantly improve speed. A general purpose reservoir simulator can improve maintainability by eliminating the need to duplicate routines which are common to many simulators. Although much attention has been paid to applying vectorization in reservoir simulation, little work has been done on parallel processing. This work is intended to test the feasibility of using both vectorization and parallel processing in a general purpose reservoir simulator. Although the simulator was developed for a Cray multiprocessor machine, the methodology used in the development is applicable to other multiple and vector processors. This paper is divided into four parts. The first part describes the mathematical formulation and numerical solution procedure for the general purpose simulator.
The second part discusses the fundamentals of vectorization and parallel processing. The third part shows how vectorization and parallel processing are applied to the simulator. Finally, the fourth part describes the performance tests and discusses the results of these tests.

MATHEMATICAL MODEL AND NUMERICAL SOLUTION PROCEDURE

Mathematical Model

The mathematical model is formulated to simulate black oil, compositional, and thermal recovery processes. It includes both fully implicit and IMPES formulations. A compositional approach is used to model black oil processes. Darcy's law and instantaneous phase equilibrium are assumed valid. In order to simulate different processes using a single code, the mutual solubilities of the components in the mixture have to be distinguished. Mutual solubilities of water and hydrocarbon components are considered unimportant in black oil or compositional processes, but are important for steam injection processes.
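To first order (ignoring serial fractions), the vectorization and parallel gains quoted above act on the same runtime and therefore compound multiplicatively. The helper below is purely illustrative arithmetic using the abstract's reported figures, not a calculation from the paper.

```python
def combined_speedup(vector_speedup: float, parallel_speedup: float) -> float:
    """Total speedup when vector and parallel gains compound multiplicatively."""
    return vector_speedup * parallel_speedup

# Figures reported above: 5.3-10x from vectorization,
# up to 3.3x more from microtasked parallel processing.
low = combined_speedup(5.3, 3.3)    # roughly 17.5x
high = combined_speedup(10.0, 3.3)  # roughly 33x
```

In practice Amdahl's law caps the combined gain: any unvectorized, serial remainder (e.g., the linear solver here) limits how far the product can be realized.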
An investigation is presented on the use of Inflow Control Valves (ICVs) and Flow Control Valves (FCVs) to control steam placement in the early stages of a Steam Assisted Gravity Drainage (SAGD) process. The two parts of this process that are examined in this paper are the steam-circulation preheating period and the early stages, up to one year, of injection/production in which the steam chamber is beginning to form. Steam injection and production in this and other thermal processes can be difficult to control because steam is highly mobile and tends to establish flow paths that may be difficult to break once established. This is especially pronounced in heterogeneous reservoirs. Two SAGD case studies have been designed that accurately model the initial preheating period, in which both wells circulate steam through an inner tubing and outer annulus in order to conductively and, to a lesser extent, convectively heat the region around the well pair and establish communication. After this initial circulation period, the wells switch to injection and production. Both cases have the same base configuration but differ in the degree of reservoir heterogeneity. In the injection well, ICVs are placed to control steam/water flow through the outer screens. In the producer, FCVs are used to flatten the production profile along the well. Two methods are examined to change valve apertures. One uses proportional-integral-derivative (PID) controllers, while the second applies an optimization algorithm directly to each individual connection productivity index. A preliminary investigation is thus presented into using feedback controllers and optimization with instantaneous reservoir parameters to improve a SAGD process in the presence of reservoir heterogeneity.
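A discrete PID controller of the kind mentioned above adjusts each valve aperture from the error between a measured quantity (e.g., a connection flow rate) and its setpoint. The sketch below is a generic textbook PID with hypothetical gains and clamping, not the tuning used in the study.

```python
class PID:
    """Discrete PID controller for a valve aperture (illustrative sketch).

    Gains and setpoint are hypothetical; output is clamped to [0, 1]
    (fully closed to fully open).
    """
    def __init__(self, kp, ki, kd, setpoint, dt):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint, self.dt = setpoint, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, measured):
        err = self.setpoint - measured
        self.integral += err * self.dt          # accumulate integral term
        deriv = (err - self.prev_err) / self.dt  # finite-difference derivative
        self.prev_err = err
        u = self.kp * err + self.ki * self.integral + self.kd * deriv
        return min(1.0, max(0.0, u))             # clamp aperture to [0, 1]
```

Each simulator timestep, the measured rate at a connection would be fed to `update` and the returned aperture applied to the valve before the next step.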
This paper describes the formulation of a thermal simulation model in a vectorized, general purpose reservoir simulator. The thermal simulation model, an option in this general purpose simulator, is equivalent to a three-phase, three-dimensional, thermal compositional simulator. It has a variety of options for modeling fluid properties and phase behavior. These include an EOS method (the three-parameter Peng-Robinson equation of state) and non-EOS methods (table look-ups and correlations). The model also considers the solubility of water in the oil phase. Furthermore, all calculations are extensively vectorized. Test problems, including problem No. 3 of the Fourth SPE Comparative Solution Project, are used to evaluate the effectiveness of vectorization. Results show that the vectorized calculations outperform the nonvectorized ones by about an order of magnitude in computational speed on the Cray X-MP/48. Overall, this thermal model runs about 5 times faster than a previously reported conventional, partially vectorized thermal simulator. Finally, the use of this thermal simulation option to study the sensitivity of simulation results to various fluid property and phase behavior calculation methods is illustrated with a hypothetical problem. Results show that simulation can be used to determine the importance of different K-value and volume calculation methods, and of including water-in-oil solubility.
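As a concrete example of the EOS option, the standard two-parameter Peng-Robinson equation of state reduces, for a pure component, to a cubic in the compressibility factor Z. The sketch below solves that cubic; it omits the third (volume-shift) parameter of the variant mentioned above, and the critical constants in the test are assumed, methane-like values.

```python
import numpy as np

R = 8.314  # universal gas constant, J/(mol K)

def pr_z_factor(T, p, Tc, pc, omega):
    """Largest real root (vapour Z) of the Peng-Robinson cubic for a pure
    component; the smallest real root would be the liquid Z."""
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
    alpha = (1.0 + kappa * (1.0 - np.sqrt(T / Tc)))**2
    a = 0.45724 * R**2 * Tc**2 / pc * alpha   # attraction parameter
    b = 0.07780 * R * Tc / pc                 # covolume
    A = a * p / (R * T)**2
    B = b * p / (R * T)
    # Cubic: Z^3 - (1-B) Z^2 + (A - 3B^2 - 2B) Z - (AB - B^2 - B^3) = 0
    coeffs = [1.0, -(1.0 - B), A - 3*B**2 - 2*B, -(A*B - B**2 - B**3)]
    roots = np.roots(coeffs)
    real = roots[np.abs(roots.imag) < 1e-9].real
    return real.max()
```

At near-atmospheric conditions well above the critical temperature, Z should come out close to 1, which is a cheap sanity check for any EOS implementation.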