Traditional reconciliation of geomodels with production data is one of the most laborious tasks in reservoir engineering. The uncertainty associated with the great majority of model variables only adds to the overall complexity. This paper introduces an engineering workflow for probabilistic assisted history matching that captures inherent model uncertainty and allows for better quantification of production forecasts. The workflow is applied to history matching of the pilot area in a major, structurally complex Middle East (ME) carbonate reservoir. The simulation model combines 49 wells in five waterflood patterns to match 50 years of oil production and 12 years of water injection and to predict eight years of production. Initially, the reservoir model was calibrated to match oil production by modifying permeability and/or porosity at well locations and by fine-tuning rock-type properties and water saturation. The second-level history match implemented two-stage Markov chain Monte Carlo (McMC) stochastic optimization to minimize the misfit in water cut on a well-by-well basis. Relative to evolutionary algorithms or the ensemble Kalman filter (EnKF), McMC methods provide a statistically rigorous alternative for sampling the posterior distribution; when deployed with direct simulation, however, they impose a high computational cost. The approach presented here accelerates the process by parameterizing the permeability field using the discrete cosine transform (DCT), constraining the proxy model using streamline-based sensitivities, and utilizing parallel and cluster computing. While probabilistic assisted history matching (AHM) successfully reduced the misfit for most producing wells, computational convergence was sensitive to the level of preserved geological detail.
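The DCT parameterization can be sketched in a few lines: transform the permeability field, retain only a small block of low-frequency coefficients (which become the stochastic sampling parameters), and reconstruct with the inverse transform. The 32×32 grid, the 8×8 retained block, and the synthetic field below are illustrative assumptions, not values from the study.

```python
import numpy as np
from scipy.fft import dctn, idctn

def compress_field(field, keep=8):
    # keep only a keep x keep block of low-frequency DCT coefficients;
    # those retained coefficients form the reduced parameter vector
    coeffs = dctn(field, norm="ortho")
    mask = np.zeros_like(coeffs)
    mask[:keep, :keep] = 1.0
    return coeffs * mask

def reconstruct(coeffs):
    # inverse transform back to a full-resolution field
    return idctn(coeffs, norm="ortho")

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 32)
# hypothetical 32x32 log-permeability field: smooth trend plus noise
field = np.outer(x, x) + 0.05 * rng.standard_normal((32, 32))

coeffs = compress_field(field)
recon = reconstruct(coeffs)
err = np.linalg.norm(recon - field) / np.linalg.norm(field)
```

Reducing a 1,024-cell field to 64 coefficients is what makes stochastic sampling tractable; the size of the retained block trades preserved geological detail against convergence speed, consistent with the sensitivity noted above.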
The optimal number of representative history-matched models was identified to capture the uncertainty in reservoir spatial connectivity using rigorous optimization and dynamic model ranking based on forecasted oil recovery factors (ORFs). The reduced set of models minimized the computational load for forecast-based analysis, while retaining the knowledge of the uncertainty in the recovery factor. The comprehensive probabilistic AHM workflow was implemented at the operator's North Kuwait Integrated Digital Oilfield (KwIDF) collaboration center. It delivers an optimized reservoir model for waterflood management and automatically updates the model quarterly with geological, production, and completion information. This allows engineers to improve the reservoir characterization and identify the areas that require more data capture.
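Reducing an ensemble to a few representative models can be illustrated with a minimal sketch: rank the members by forecast ORF and keep those nearest chosen quantiles. The ensemble size, the normal ORF distribution, and the P10/P50/P90 quantile choice are assumptions for illustration; the study's selection uses rigorous optimization and dynamic ranking on actual forecasts.

```python
import numpy as np

def select_representatives(orf, quantiles=(0.1, 0.5, 0.9)):
    # rank ensemble members by forecast ORF and keep those closest
    # to the requested quantiles (a P10/P50/P90-style selection)
    orf = np.asarray(orf)
    targets = np.quantile(orf, quantiles)
    picks = {int(np.argmin(np.abs(orf - t))) for t in targets}
    return sorted(picks)

rng = np.random.default_rng(1)
ensemble_orf = rng.normal(0.35, 0.04, size=200)  # hypothetical forecast ORFs
reps = select_representatives(ensemble_orf)
```

The reduced set (here at most three models) carries the spread of the full ensemble's recovery-factor uncertainty into forecast-based analysis at a fraction of the simulation cost.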
History matching of complex and large reservoirs has always posed difficulties for reservoir engineers. To help during history matching, various assisted history matching (AHM) algorithms have been developed. While AHM can help automate various aspects of history matching, the algorithms often suffer from slow convergence. This work proposes an ensemble-based Markov chain Monte Carlo (MCMC) algorithm with efficient sampling from the given distribution of properties. For efficient sampling of properties during AHM, streamline trajectories are used to find the connections between source(s) and producer wells. Streamline tracking based on the output of the full-physics simulator is used as a guideline to capture fluid flow patterns, and only the properties of grid cells along the streamline trajectories are considered prime candidates for history matching. The proposed algorithm was applied to a sector model of a reservoir as a test case study to history match water cut on a well-by-well basis.
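A minimal sketch of streamline-guided sampling, assuming a toy quadratic misfit in place of the full-physics water-cut misfit and a hard-coded diagonal path in place of traced streamline trajectories:

```python
import numpy as np

rng = np.random.default_rng(2)

# hypothetical 10x10 permeability grid; in the method described above
# the candidate cells come from streamline tracking on the simulator
# output, here they are simply the diagonal
grid = np.ones((10, 10))
streamline_cells = [(i, i) for i in range(10)]

def misfit(k):
    # stand-in for the water-cut misfit from a flow simulation
    return float(np.sum((k - 2.0) ** 2))

def mcmc_step(k, sigma=0.3):
    # Metropolis step that proposes a change only along the streamline
    prop = k.copy()
    i, j = streamline_cells[rng.integers(len(streamline_cells))]
    prop[i, j] += sigma * rng.standard_normal()
    # accept with the usual Metropolis ratio on exp(-misfit)
    if np.log(rng.random()) < misfit(k) - misfit(prop):
        return prop
    return k

k = grid.copy()
for _ in range(2000):
    k = mcmc_step(k)
```

Restricting proposals to streamline cells shrinks the search space from all grid cells to only those that actually connect injectors and producers, which is where the convergence gain comes from.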
Implementing asset-wide intelligent digital oilfield (iDOF) solutions, which aim to optimize oil and gas production systems in an "intelligent" manner, requires integrating concepts from different disciplines, such as artificial intelligence. Neural network (NN)-based models are a form of artificial intelligence, a branch of computer science that generates mathematical models that can be "trained" to determine relationships between inputs and outputs, recognize patterns, and perform reliable short-term predictions. NN models using real-time data are proven tools for short-term well production forecasting with acceptable accuracy. An oilfield is a hostile environment for even the most robust instrumentation. As a result, technical outages or anomalies can result in lost or poor-quality data. Experience shows that many samples in a real-time database are frozen, missing, corrupted, or incorrect. This represents the biggest challenge to creating a reliable NN model. However, the models can be trained to correct or estimate missing real-time data. This paper presents a case study in which nodal analysis was used to populate missing data for training the NN model, thus improving the reliability of the model. Because nodal analysis is not suitable for prediction, time-series analysis was used to assess the impact of historical events, and operational conditions were used to forecast trends. An NN trained with nodal analysis can cover a wide variability spectrum and, when trained with a time-lapse series, can predict short-term (30-day) production scenarios by changing highly correlated parameters, such as tubing head pressure (THP) or frequency (Freq). This paper describes training NNs using nodal analysis and time-series analysis to predict short-term water cut (WC or BS&W) and liquid flow rate. This technique was applied in over 20 wells with electric submersible pumps (ESPs) and gas lift (GL).
The NNs made robust estimates of production rate and an acceptable prediction trend for 30 days, even when confronted with flow meter instrumentation failure, lost signals, and out-of-calibration instruments. Hence, the NN served as a "virtual meter," providing instantaneous and accurate estimation of production data.
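The virtual-meter idea can be sketched with a small hand-rolled network, assuming synthetic THP/frequency/rate data in place of real-time and nodal-analysis records and a toy architecture in place of the production NN:

```python
import numpy as np

rng = np.random.default_rng(3)

# hypothetical training data: tubing head pressure (THP) and ESP
# frequency vs. liquid rate; in the workflow above, gaps in such data
# are filled with nodal-analysis results before training
thp = rng.uniform(100.0, 300.0, 500)          # psi
freq = rng.uniform(40.0, 60.0, 500)           # Hz
rate = 5.0 * freq - 0.2 * thp + rng.normal(0.0, 5.0, 500)

X = np.column_stack([thp, freq])
X = (X - X.mean(0)) / X.std(0)                # standardize inputs
y = (rate - rate.mean()) / rate.std()         # standardize target

# minimal one-hidden-layer network trained by batch gradient descent
W1 = 0.5 * rng.standard_normal((2, 8))
b1 = np.zeros(8)
W2 = 0.5 * rng.standard_normal(8)
b2 = 0.0

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h @ W2 + b2, h

lr = 0.05
for _ in range(2000):
    pred, h = forward(X)
    err = pred - y
    gh = np.outer(err, W2) * (1.0 - h ** 2)   # backprop through tanh
    W2 -= lr * (h.T @ err / len(y))
    b2 -= lr * err.mean()
    W1 -= lr * (X.T @ gh / len(y))
    b1 -= lr * gh.mean(0)

pred, _ = forward(X)
rmse = float(np.sqrt(np.mean((pred - y) ** 2)))
```

Once trained, `forward` can be evaluated on live THP and frequency readings to serve as the "virtual meter" whenever the physical flow meter drops out.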
Surveillance and optimization of waterfloods in low-permeability carbonate reservoirs pose many challenges. Updating reservoir models is tedious and time-consuming, involving multiple data sources, model updates, and simulations. Technological challenges include large simulation models, waterflood complexities, and limited real-time data. Human challenges must be addressed as well, since waterflooding decisions affect multiple disciplines: reservoir engineers, production engineers, facilities engineers, IT, operations, and asset managers. The ‘languages’ and interests of these disciplines are quite different, necessitating a workflow that satisfies the needs of all disciplines and integrates people, processes, and technology. This paper presents an innovative automated workflow to enable monitoring, diagnostics, forecasting, and optimization of waterflooding processes in days instead of weeks. This workflow seamlessly captures historical and monthly real-time data, updates simulation history, creates simulation restart prediction points, runs numerical simulations with optimization scenarios, selects the global optimum solution by scenario, and compares results so that multi-disciplinary teams can make reactive or proactive decisions to maximize short-term oil rates and long-term oil recovery, while honoring constraints on voidage replacement ratios, reservoir pressure, sweep efficiencies, production, and injection. The workflow automatically updates real-time production data in the simulator each month. A base case is run to recalculate waterflooding indicators. The process then starts a 24-month production forecast, running hundreds of scenarios under constrained optimization to achieve global optimization points. The optimizer changes control variables such as injection volumes, tubing head pressure, bottomhole pressure, and production allowables. The workflow ranks potential well decisions with important impacts on oil rate and water cut.
The workflow uses an intuitive user interface incorporating the needs of multiple engineering and operations disciplines, and facilitates one common language while evaluating and optimizing waterfloods. This workflow has been implemented for a waterflood in a Middle East carbonate reservoir to help engineers evaluate the waterflood and make better, faster decisions.
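The monthly scenario-screening step can be sketched as follows, with a toy analytic forecast standing in for the numerical simulator and the voidage replacement ratio (VRR) as the only constraint checked; all coefficients and ranges are illustrative assumptions:

```python
import numpy as np

def run_scenario(inj_rate, bhp):
    # stand-in for a 24-month simulation forecast; the real workflow
    # runs the numerical simulator for each candidate scenario
    cum_oil = 1000.0 + 3.0 * inj_rate - 0.02 * inj_rate ** 2 - 0.5 * bhp
    vrr = inj_rate / 80.0           # implied voidage replacement ratio
    return cum_oil, vrr

# candidate monthly control settings (hundreds, as in the workflow)
candidates = [(inj, bhp)
              for inj in np.linspace(20.0, 150.0, 25)
              for bhp in np.linspace(500.0, 1500.0, 20)]

best, best_oil = None, -np.inf
for inj, bhp in candidates:
    oil, vrr = run_scenario(inj, bhp)
    if not (0.9 <= vrr <= 1.1):     # honor the VRR target constraint
        continue                    # infeasible scenario, discard
    if oil > best_oil:
        best, best_oil = (inj, bhp), oil
```

The surviving scenario with the highest forecast oil becomes the suggested control setting for the coming month; the full workflow applies the same filter-then-rank logic across many more controls and constraints.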
Production from low-permeability carbonate reservoirs is commonly supported by waterflooding (WF). Despite decades of research and field experience, oil recovery expectations in carbonates typically remain low due to factors such as high-permeability streaks, poor microscopic sweep efficiency, and unfavorable mobility ratios, all of which can dramatically impair oil recovery. Vast amounts of remaining oil in place have led operators to analyze opportunities to improve WF management and production. This paper presents an approach to increase recovery and improve performance indicators in a low-permeability carbonate reservoir. The main objective of this effort is to maximize short-term oil rates and long-term recovery while honoring target constraints on voidage-replacement ratio (VRR), reservoir pressure, sweep efficiencies, and production and injection rates. In essence, our approach seeks optimum WF solutions by coupling a full-field reservoir simulator with an adaptive, simulated-annealing optimization engine. Predefined scenarios impose hard constraints on production and injection rates, field conditions, and well operating limits. VRR, nominal pressure (Pn), volumetric sweep efficiency (Evol), and displacement efficiency (Ed) are soft constraints used to assess design feasibility. Reservoir recovery and displacement-efficiency performance indicators are pursued at different levels in the optimization loop. The outcome is that the following month's operational decisions are suggested by the optimizer. This paper describes how this optimization methodology improves short-term production rates by about 10-20% while enhancing oil recovery by between 1 and 8%. It also discusses additional strategies to further improve oil recovery expectations, using this automated workflow as the foundation.
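A minimal sketch of a simulated-annealing loop with a soft VRR constraint, assuming a one-variable analytic objective in place of the full-field simulator; the objective shape, penalty weight, and cooling schedule are illustrative assumptions, not the paper's settings:

```python
import math
import random

random.seed(5)

def objective(x):
    # stand-in for forecast oil recovery vs. a single control variable
    # (e.g. an injection rate); the field workflow evaluates this with
    # the full-field reservoir simulator
    return 50.0 - (x - 70.0) ** 2 / 100.0

def penalty(x):
    # soft constraint: keep the implied VRR (x / 80) within 10% of 1.0
    return 1e3 * max(0.0, abs(x / 80.0 - 1.0) - 0.1) ** 2

def anneal(x0, t0=10.0, cooling=0.995, steps=3000):
    # simulated annealing maximizing objective minus soft-constraint
    # penalty, with a geometric cooling schedule
    x, t = x0, t0
    best_x, best_f = x, objective(x) - penalty(x)
    for _ in range(steps):
        cand = x + random.gauss(0.0, 2.0)
        df = (objective(cand) - penalty(cand)) - (objective(x) - penalty(x))
        # accept improvements always; accept worse moves with
        # probability exp(df / t), which shrinks as t cools
        if df > 0 or random.random() < math.exp(df / t):
            x = cand
        f = objective(x) - penalty(x)
        if f > best_f:
            best_x, best_f = x, f
        t *= cooling
    return best_x

x_opt = anneal(40.0)
```

Encoding VRR (and, in the full workflow, Pn, Evol, and Ed) as penalties rather than hard bounds is what lets the annealer traverse mildly infeasible designs early on while the cooling schedule steers it to a feasible optimum.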