Currently, to build models for dynamic simulations of large oil and gas fields, engineers have to upscale the original geological grid so as not to exceed the limits set by computer memory size, CPU performance, and the capabilities of standard commercial software packages. As a result, the upscaled model often has a rather low level of detail. Since upscaling inevitably introduces additional simulation errors, the resulting model can solve only a limited number of practical tasks. Coarsened models are typically used for strategic tasks: finding an appropriate field development system, understanding fluid and gas migration processes, and planning future production for the field. Such upscaled models are rarely used for the risky, high-cost, and practically important tactical and operational tasks of field development management and production monitoring. It is ironic that oil and gas companies invest large sums of money in detailed reservoir geological descriptions, only to discard that detail in the process of hydrodynamic simulation. Engineers often call this paradox the "simulation scale problem". Since computer hardware performance grows exponentially over time, it is the technological level of the software that becomes the main limiting factor. If one could build a coherent hardware-plus-software solution that resolves flow dynamics in porous media on geological grids with tens or hundreds of millions of blocks without upscaling, the problem would be solved. This article discusses a technology for constructing and efficiently handling giant field models through sector modeling and advanced parallel algorithms. The important role of modern computer hardware architecture, especially processor and RAM design, is emphasized.
The authors discuss practical aspects concerning model dimensions, simulation speed for the whole field model or any of its parts, the choice of an optimal partitioning of the model into sectors, and methods for setting boundary conditions. Application results are demonstrated for one of the world's biggest oil fields, with a geological model of about 43 million grid blocks. The authors show that when the whole field is divided into a certain number of sector models, the sum of their calculation times may be substantially smaller than the calculation time of the full model. At the same time, if boundary conditions are included in the simulations of the subdomains, the spread in calculated production rates can be as small as 1%. The approach described in this paper appears to be efficient for history matching of large hydrodynamic models: it reduces project completion time and avoids the unnecessary loss of modeling precision caused by grid upscaling.
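The claim that the sum of sector calculation times can be smaller than the full-model time follows from superlinear solver cost growth. A minimal sketch, assuming (hypothetically, as the abstract does not state the exponent) that solver runtime scales as N**alpha with alpha > 1 in the number of active grid blocks:

```python
# Sketch: why cutting a full-field model into sectors can reduce *total*
# runtime. The cost model cost(N) ~ c * N**alpha with alpha > 1 is an
# illustrative assumption, not a figure from the paper.

def solver_cost(n_blocks: float, alpha: float = 1.3, c: float = 1.0) -> float:
    """Illustrative runtime model: cost grows superlinearly with block count."""
    return c * n_blocks**alpha

def sector_speedup(n_blocks: float, n_sectors: int, alpha: float = 1.3) -> float:
    """Ratio of full-model runtime to the sum of all sector runtimes."""
    full = solver_cost(n_blocks, alpha)
    sectors = n_sectors * solver_cost(n_blocks / n_sectors, alpha)
    return full / sectors

# With alpha = 1.3, splitting a 43-million-block model into 8 equal sectors:
print(round(sector_speedup(43e6, 8), 2))  # -> 1.87, i.e. 8**(alpha - 1)
```

The ratio reduces algebraically to n_sectors**(alpha - 1), independent of model size, which is why the benefit grows with the number of sectors as long as boundary conditions keep the subdomain solutions accurate.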
To create an efficient system for modeling giant reservoir models, a whole range of technical challenges has to be addressed. In this paper we concentrate on the parallel scalability of complex computing systems such as multi-CPU clusters and workstations with GPU processing cards. For multi-CPU distributed-memory systems, it is shown that roughly a tenfold improvement in parallel performance can be achieved with a new, so-called "hybrid" approach, in which the usual MPI synchronization between cluster nodes is interleaved with shared-memory, thread-based synchronization at the node level. It is demonstrated that for some black-oil models of real oil and gas fields, the parallel acceleration factor can exceed 1300 on 4096 CPU cores. Even for the extreme example of a giant full-field model containing over 14,000 production and injection wells, a parallel acceleration of over 350 can be achieved. For CPU-GPU and CPU-CPU based systems, we compare the parallel performance of simple iterative algorithms and of the realistic preconditioner-based algorithms typically used in oil and gas simulations. Hardware systems equipped with AMD FirePro cards, NVIDIA Tesla cards, and 16-core dual Intel Xeon E2580 processors are compared in this study.
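The quoted acceleration of 1300 on 4096 cores can be put in perspective with standard scalability arithmetic. A short sketch, using only the figures stated above; the Amdahl's-law inversion is textbook arithmetic, not a claim from the paper:

```python
# Back-of-envelope scalability check on the reported speedup figures.

def efficiency(speedup: float, cores: int) -> float:
    """Parallel efficiency: achieved speedup divided by core count."""
    return speedup / cores

def amdahl_serial_fraction(speedup: float, cores: int) -> float:
    """Serial fraction f implied by Amdahl's law S = 1 / (f + (1 - f) / P),
    solved for f given the measured speedup S on P cores."""
    return (cores / speedup - 1) / (cores - 1)

# Best black-oil case from the text: speedup > 1300 on 4096 cores.
print(f"efficiency:      {efficiency(1300, 4096):.1%}")          # ~31.7%
print(f"serial fraction: {amdahl_serial_fraction(1300, 4096):.3%}")
```

Sustaining ~32% efficiency at 4096 cores requires the effectively serial fraction of the solver to stay near 0.05%, which is the motivation for replacing pure-MPI synchronization with the hybrid MPI-plus-threads scheme at the node level.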
Effective field development planning for giant fields as big as Samotlor in Western Siberia faces a number of challenges. The Samotlor field has almost 50 years of production history and is known for its complex multi-reservoir geology, active gas cap, sophisticated waterflood program of giant proportions, and more than 14 thousand wells. A field of such complexity and magnitude cannot be developed without a full-field hydrodynamic model covering the entire reservoir. Historically, due to this extreme complexity, only highly upscaled models have been used. These models served primarily for calculating the remaining reservoir energy, material balance analysis, and gas cap migration, but never had enough spatial resolution to allow simulation at the well level. To plan infill drilling and various workovers, most production units relied on sector models, which did have the necessary spatial resolution but had obvious limitations in their areal coverage of the field. As a result, due to the differences in grid resolution, the results of history matching the sectors were rarely carried over to the full-field model, causing frequent inefficiencies and redundancies in simulation workflows. In the course of planning the development of the remaining oil reserves of the Samotlor field, a new "unified model" concept was developed, in which one and the same model is used both for global optimization of the full-field model and for making decisions at the level of individual wells. To determine the optimal grid resolution, models with 5, 40, 160, and more than 400 million active grid blocks were built and their simulation results compared. For the first time, a hydrodynamic model based on the un-upscaled geological grid of the AV1-5 reservoir group of the Samotlor field was simulated.
Taking into account the number of wells and the length of the production history, this is probably the most complex simulation of reservoir dynamics ever attempted in the oil and gas industry.
This paper presents a novel approach to the numerical simulation of hydraulic fractures in dynamic reservoir simulations. Fluid flow in a fracture is modeled through a network of virtual perforations created in the grid blocks intersected by the expected fracture trajectory and directly connected to the fractured well. It is demonstrated that by adjusting the productivity indices of the fracture's virtual perforations at each time step, practically any static and dynamic behavior of the fracture and its proppant can be realistically modeled. Crossflow between real and virtual perforations is handled by solving a joint well equation. The algorithm takes into account fracture permeability degradation due to pressure, proppant destruction, or total accumulated liquid flux. When the fracture half-length is greater than the average grid block size, the approach described in this paper provides much more realistic simulations than the conventional skin-factor approach or the manual creation of high-permeability channels. One of the most important advantages of the proposed method is that it can be robustly applied to large full-field models with hundreds of hydraulically fractured horizontal or vertical wells.
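The joint well equation mentioned above couples all perforations of a well, real and virtual, through a single bottom-hole pressure. A minimal single-phase sketch under that assumption (the productivity indices and pressures below are made-up illustrative numbers, not values from the paper):

```python
# Sketch of a joint well equation: every perforation i contributes a rate
# q_i = PI_i * (p_block_i - p_bhp), and all perforations share one
# bottom-hole pressure p_bhp. Given a target total well rate, p_bhp
# follows from a single linear equation. Fracture degradation would be
# modeled by reducing the virtual perforations' PI values each time step.

def solve_bhp(pi: list[float], p_block: list[float], q_target: float) -> float:
    """Solve sum_i PI_i * (p_i - p_bhp) = q_target for p_bhp."""
    s_pi = sum(pi)
    s_pip = sum(a * p for a, p in zip(pi, p_block))
    return (s_pip - q_target) / s_pi

def perforation_rates(pi, p_block, p_bhp):
    """Per-perforation rates; a negative value means crossflow, i.e. fluid
    leaving the wellbore into that grid block."""
    return [a * (p - p_bhp) for a, p in zip(pi, p_block)]

# Two real perforations plus three virtual fracture perforations (hypothetical):
pi = [2.0, 1.5, 5.0, 4.0, 3.0]                  # productivity indices
p_block = [250.0, 248.0, 255.0, 252.0, 246.0]   # grid-block pressures, bar
p_bhp = solve_bhp(pi, p_block, q_target=120.0)
rates = perforation_rates(pi, p_block, p_bhp)
```

By construction the rates sum to the target rate, and any perforation whose block pressure falls below p_bhp automatically shows crossflow, which is the behavior the joint well equation is meant to capture.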