Oil-water relative permeability and capillary pressure are key inputs for multiphase reservoir simulation. These data are strongly affected by the wettability state of the reservoir and by the pore-space characteristics of the rock. In the laboratory, however, several challenges complicate the validation and interpretation of special core analysis (SCAL) measurements, mostly associated with core preservation or restoration processes and the resulting wettability states. To improve the dynamic reservoir rock typing (DRRT) process, a new model is proposed that describes the change of wettability fraction with depth in mixed-wet reservoirs. The model is grounded in the physics of the interactions between rock grain surfaces and the fluids filling the pore space. First, it considers oil migration from the source rock into the originally water-wet reservoir and the corresponding rise in capillary pressure as the height above the free water level (HAFWL) progressively increases. Oil-wet and water-wet fractions are then estimated for different static reservoir rock types (SRRT) and different HAFWL, based on the wettability-change potential of the rock-fluid system and on oil-water capillary pressure curves. Additionally, mixed-wet capillary pressure and relative permeability curves are estimated for both oil displacing water (drainage) and water displacing oil (imbibition), based on the estimated mixed-wet fractions and single-wet curves. We discuss the model assumptions and the uncertainties of its parameters, and present a comprehensive sensitivity study of the impact of wettability variability with depth on oil recovery. The study used a synthetic carbonate-reservoir simulation model under waterflooding, incorporating the concept of DRRT defined from the different SRRT and the estimated wettability fractions.
The results showed a significant impact of wettability variability on oil-in-place and reserves estimates for waterflooding in typical complex, mixed-wet carbonate reservoirs, such as those found in the Brazilian Pre-Salt. We also discuss the potential impact of wettability change with depth on well logs such as resistivity, nuclear magnetic resonance (NMR), and dielectric logs. The proposed reservoir wettability model and its corresponding DRRT workflow are relatively simple and widely applicable, and may significantly improve reservoir simulation and wettability uncertainty analysis. The model also explicitly identifies the wettability parameters that must be obtained from laboratory experiments and well logs. Finally, it may be integrated with special core analysis, well logs, and digital-rock analysis.
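The idea of estimating mixed-wet curves from single-wet end members weighted by a depth-dependent wettability fraction can be illustrated with a minimal sketch. The linear blending rule, the logistic fraction model, and all function names and parameter values below are assumptions introduced for illustration only; they are not the formulation used in the paper.

```python
# Hypothetical sketch: blend single-wet capillary pressure (Pc) curves by an
# oil-wet fraction that grows with height above free water level (HAFWL).
# The logistic fraction model and linear interpolation are illustrative
# assumptions, not the paper's actual model.
import math

def oil_wet_fraction(hafwl_m, h50=20.0, slope=0.15):
    """Illustrative logistic model: oil-wet pore-surface fraction vs. HAFWL [m]."""
    return 1.0 / (1.0 + math.exp(-slope * (hafwl_m - h50)))

def mixed_wet_pc(sw, hafwl_m, pc_water_wet, pc_oil_wet):
    """Interpolate single-wet Pc end members with the estimated oil-wet fraction."""
    f_ow = oil_wet_fraction(hafwl_m)
    return (1.0 - f_ow) * pc_water_wet(sw) + f_ow * pc_oil_wet(sw)

# Simple single-wet end members (purely illustrative curve shapes).
pc_ww = lambda sw: 2.0 * (1.0 - sw) ** 2   # water-wet: Pc >= 0
pc_ow = lambda sw: -1.5 * sw ** 2          # oil-wet:   Pc <= 0

print(mixed_wet_pc(0.5, 5.0, pc_ww, pc_ow))   # shallow: near water-wet (positive Pc)
print(mixed_wet_pc(0.5, 80.0, pc_ww, pc_ow))  # high above FWL: near oil-wet (negative Pc)
```

The same fraction could be used to weight relative permeability end members; in practice the blending rule itself would be calibrated against SCAL data.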
Commercial reservoir simulators have traditionally been optimized for distributed parallel execution on central processing units (CPUs). Recent advances in graphics processing units (GPUs) have led to the development of GPU-native simulators and triggered a shift toward hardware-agnostic design in existing CPU solutions. For the latter, the suite of algorithms and data structures employed for a given computation is implemented for each target device. The result is a hybrid approach in which some simulator components inherently expose enough instruction-level parallelism or memory-bandwidth demand to warrant running on the GPU, while others are better suited to the CPU. This paper examines the performance characteristics of a commercial black-oil reservoir simulator that was recently extended with GPU support. Each simulation case distributes load across the simulator's modules differently, depending on the physical properties being modeled and the forecast data requested. To assess this, the scalability of the simulator is measured in detail on both CPU and GPU, for components where both implementations are available, focusing on the time spent in model initialization, property calculation, linearization, the linear solver, field management, and reporting. This is done using test cases that stress the simulator along several axes: grid resolution, petrophysical property distributions, well count, and the volume of reported data. The synthetic models underlying these studies were designed to represent realistic reservoir engineering scenarios. The results show that a static partition between CPU- and GPU-assigned tasks, as employed by default in the simulator, performs well when the work dedicated to grid-cell properties and linear solution vastly outweighs the effort spent resolving well or aquifer connections, field management, and reporting. This is expected for typical simulation cases.
However, when one of the latter aspects becomes dominant, the balance can shift, leading to suboptimal hardware utilization. In conclusion, if performance is to be maintained across all possible inputs, a fully CPU- and GPU-capable simulator is needed, employing a dynamic scheduling strategy in which the runtime data locality, volume, and parallelism of the corresponding computations are all considered when determining the target device for each operation. To the authors' knowledge, a study of the scalability of a commercial reservoir simulator across two different hardware architectures has not previously been conducted at this level of detail. The results on realistic models are presented in the hope that they will contribute to the discussion of the benefits of modern computing hardware for reservoir simulation and help drive deployment and design decisions for existing and future developments in both the commercial and academic spheres.
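The dynamic scheduling strategy argued for above could be caricatured as a simple per-task cost heuristic. The thresholds, task names, and cost model below are invented for illustration and do not reflect the simulator's internals; a production scheduler would use measured kernel timings and transfer bandwidths.

```python
# Hypothetical sketch of a dynamic CPU/GPU scheduling heuristic.
# All names, thresholds, and the cost model are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    n_elements: int     # degree of data parallelism (e.g., active grid cells)
    data_on_gpu: bool   # current residency of the task's inputs
    bytes_to_move: int  # transfer volume required if the device changes

def choose_device(task, gpu_breakeven=50_000, transfer_penalty_bytes=8_000_000):
    """Prefer the GPU for wide, resident workloads; the CPU otherwise."""
    if task.n_elements < gpu_breakeven:
        return "cpu"   # too little parallelism to amortize kernel-launch overhead
    if not task.data_on_gpu and task.bytes_to_move > transfer_penalty_bytes:
        return "cpu"   # the host-to-device copy would dominate the kernel time
    return "gpu"

tasks = [
    Task("linearization",    2_000_000, True,  0),
    Task("well_connections",     1_200, False, 64_000),
    Task("property_calc",    2_000_000, False, 32_000_000),
]
for t in tasks:
    print(t.name, "->", choose_device(t))
```

The point of the sketch is only that device choice depends on data locality and volume as well as parallelism, which is why a static partition can become suboptimal when the workload mix shifts.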
Commercial reservoir simulators have traditionally been optimized for parallel computation on central processing units (CPUs). Recent advances in general-purpose graphics processing units (GPUs) have provided a powerful alternative to the CPU, presenting an opportunity to significantly reduce simulation run times. Realizing peak performance on a GPU requires GPU-specific code and data layouts suited to the hardware. At the time of writing, the cost of copying data between CPU memory and GPU memory is very high; peak performance is achieved only if such transfers are minimized. In Cao et al. (2021), the authors establish approaches that enable a simulator to deliver excellent performance on either a CPU or a GPU, with the same simulation result on both. We discuss how their prototype was generalized into high-quality, maintainable code applicable across a wide range of models. Different parts of a reservoir simulator benefit from different approaches. A modern, object-oriented simulator requires components for initialization, property calculation, linearization, the linear solver, well and aquifer calculations, field management, and reporting. Each of these areas presents architectural challenges when broadening the simulator's scope from CPU-only to supporting either CPU or GPU. We outline these challenges and present the approaches taken to address them. In particular, we discuss the importance of abstracting compute scheduling, testing methods, data storage classes, and the associated memory management into a generic framework layer. We have created a high-quality reservoir simulator that can run on a CPU or a GPU with results that match to within a very small tolerance, and we present the software engineering approaches that enable the team to achieve and maintain this in the future. In addition, we present test outcomes and discuss how to achieve excellent performance.
To our knowledge, no simulator capable of both CPU simulation and full GPU simulation (meaning simulation with no copies of full grid-sized data for purposes other than reporting) has previously been presented. We present the novel software approaches used to implement the first such commercial simulator.
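Abstracting data storage and memory management into a generic framework layer, as described above, might resemble the following sketch. The class, its methods, and the copy-counting mechanism are hypothetical and chosen for illustration; a real implementation would wrap CUDA/HIP device buffers rather than Python lists.

```python
# Hypothetical sketch of a device-agnostic array abstraction that tracks
# residency and avoids redundant host<->device copies. Names and mechanics
# are invented for illustration only.
class DeviceArray:
    def __init__(self, data, device="cpu"):
        self._data = list(data)
        self._device = device
        self.copies = 0  # count of host<->device transfers, for profiling

    def to(self, device):
        """Move the array only if it is not already resident on `device`."""
        if device != self._device:
            self.copies += 1  # a real implementation would perform the memcpy here
            self._device = device
        return self

    @property
    def device(self):
        return self._device

pressure = DeviceArray([101.3] * 4, device="cpu")
pressure.to("gpu")  # one transfer
pressure.to("gpu")  # no-op: already resident, so no redundant copy
print(pressure.device, pressure.copies)
```

Keeping residency tracking behind one abstraction is one plausible way to enforce the "no full grid-size copies except for reporting" property across all simulator components.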
Summary: A multiscale sequential fully implicit (MS SFI) reservoir simulation method implemented in a commercial simulator is applied to a set of reservoir engineering problems to understand its potential. Our assessment highlights workflows where the approach brings substantial performance advantages and insight generation. The understanding gained during commercialization on approximately 40 real-world models is illustrated through simpler but representative data sets available in the public domain. The main characteristics of the method and the key features of the implementation are briefly discussed. The robust fully implicit (FI) simulation method is used as a benchmark, and the MS SFI implementation is found to faithfully reproduce FI results for black-oil problems. We provide evidence and analysis of why the MS SFI approach can achieve high levels of performance and fidelity. The method supports the solution of unique problems that benefit from incorporating multiscale geology and multiscale flow physics. The MS SFI implementation was used to successfully simulate a typical sector model used for field pilots at extremely high "whole core" scale resolution within a practical time frame, leveraging high-performance computing (HPC); this could not be achieved with the FI approach. The combination of MS SFI and HPC offers immense potential to simulate geological models with grids fine enough to capture mesoscopic- or laminar-scale geology. By design, the method demands fewer computing resources than FI, making it far more cost-effective for such high-resolution models. We conclude that the MS SFI method has a distinct capability to enhance reservoir engineering practice in high-resolution, simulation-driven workflows in the context of subsurface uncertainty quantification, field development planning, and reservoir performance optimization. NOTE: This paper is published as part of the 2021 SPE Reservoir Simulation Conference Special Issue.
A new implementation of a multiscale sequential fully implicit (MS SFI) reservoir simulation method is applied to a set of reservoir engineering problems to understand its utility. An assessment is made to highlight areas where the approach brings substantial performance advantages and addresses problems not successfully resolved by existing methods. This work makes use of the first implementation of the MS SFI method in a commercial reservoir simulator. The key features of the method and its implementation are briefly discussed. The lessons learned during field testing and commercialization on about 40 real-world models are illustrated through simpler but representative data sets available in the public domain. The workhorse robust fully implicit (FI) method is used as a reference for benchmarking, and the MS SFI method faithfully reproduces FI results for black-oil problems. We conclude that the MS SFI method can support reservoir engineering decision-making, especially in the areas of subsurface uncertainty quantification, representative model selection, model calibration, and optimization. The method also shows considerable potential for handling high levels of reservoir heterogeneity. The often-overlooked challenge of including fine-scale heterogeneity when scaling up EOR processes from laboratory to field appears to have found a practical solution in the combination of MS SFI and high-performance computing (HPC).
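The sequential fully implicit splitting at the heart of MS SFI can be caricatured as alternating implicit pressure and transport solves within each timestep, iterated until the two are consistent. The skeleton below is a conceptual sketch on a toy single-cell state; the function names, convergence test, and relaxation "solvers" are invented for illustration and do not represent the commercial implementation (whose pressure stage would use a multiscale solver).

```python
# Conceptual sketch of a sequential fully implicit (SFI) timestep:
# implicit pressure solve, then implicit transport solve with fixed total
# flux, iterated until the coupling residual is small. All names, the toy
# scalar state, and the tolerances are illustrative assumptions.
def sfi_timestep(state, dt, solve_pressure, solve_transport,
                 tol=1e-6, max_outer=50):
    for outer in range(max_outer):
        pressure, flux = solve_pressure(state, dt)     # implicit pressure stage
        saturation = solve_transport(state, flux, dt)  # implicit transport stage
        residual = abs(saturation - state["sw"])       # outer-loop coupling residual
        state = {"p": pressure, "sw": saturation}
        if residual < tol:
            return state, outer + 1                    # converged outer iterations
    raise RuntimeError("SFI outer iterations did not converge")

# Toy single-cell "solvers" that simply relax toward fixed targets.
solve_p = lambda s, dt: (0.5 * (s["p"] + 200.0), 1.0)
solve_t = lambda s, flux, dt: 0.5 * (s["sw"] + 0.8)

final, iters = sfi_timestep({"p": 100.0, "sw": 0.2}, dt=1.0,
                            solve_pressure=solve_p, solve_transport=solve_t)
print(final, iters)
```

The practical appeal described in the abstract comes from the pressure stage: solving it with a multiscale method instead of a fine-grid FI system is what reduces the computing resources needed at very high grid resolutions.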