Abstract

Generating multiple history-matched reservoir models by stochastic sampling to quantify the uncertainty in oil recovery predictions has recently attracted interest in the industry. Coupling a stochastic sampling algorithm with a Bayesian analysis potentially allows all sources of uncertainty to be incorporated, including data, simulation and interpolation errors. However, the accuracy of the uncertainty estimates depends strongly on the sampling performance. To improve the robustness of the coupled Bayesian methodology, the factors that affect the accuracy of the estimates must be examined. This paper investigates how different sampling strategies affect the estimation of uncertainty in predictions of reservoir production. The sampling strategy involves both the choice of algorithm and the selection of algorithm parameters for sampling the high-dimensional parameter space. We present examples of using both the Neighbourhood Algorithm (NA) and a Genetic Algorithm (GA) to generate history-matched reservoir models for a real field case from the North Sea.

Introduction

Reservoir engineers often face the task of predicting the production performance of a subsurface hydrocarbon reservoir for decision making in field development. A computer model of the reservoir is constructed from the available geological information, and the mathematical equations of fluid flow in the reservoir are solved to simulate the reservoir response. The reservoir model is then calibrated against the data observed in the field: hydrocarbon production or reservoir pressure measurements from the wells are typically compared with the simulated results, and the reservoir model is modified until a reasonable match between the simulated response and the observed data is achieved. This calibration process is called history matching. The history-matched model is then simulated over the prediction period to obtain the recovery prediction.
The data used to construct and calibrate the reservoir model are limited to small-scale samples taken at sparse points (i.e. wells) in the field. The model is therefore always subject to interpolation at various scales, and so the recovery prediction based on the model is uncertain. Moreover, errors in data measurement and in the mathematics of reservoir modeling introduce additional uncertainty. As a result of these uncertainties, the solution of the history-matching problem is not unique: multiple reservoir models may match the history data equally well. Recovery predictions are therefore uncertain, although this uncertainty is not captured by a single history-matched model.

To improve operational and investment decisions in reservoir management, the inherent uncertainty in predictions must be quantified, usually in the language of probability theory. The uncertainty is quantified by generating models that match the observed measurements and are consistent with the known details of the reservoir geology; the forecasts from these models then provide the probability distribution of the reservoir response. To generate the models, an optimization (or sampling) technique is often employed, in which the discrepancy between the model response and the observed data is minimized by modifying the model parameters. The main challenges in the sampling process are the large number of unknown parameters, the existence of sharp local minima in the search space, and the high computational cost of reservoir simulation. These difficulties can have a detrimental effect on the accuracy of the sampling. This paper is concerned with the effect of any inaccuracies in the sampling process on the prediction uncertainty estimates.

History Matching in a Bayesian Framework

The Bayesian framework [1] for statistical inference provides a formal and systematic procedure for updating current knowledge of a system on the basis of available data [2].
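The non-uniqueness described above can be illustrated with a toy sketch: a stand-in "simulator", a sum-of-squares misfit between simulated and observed histories, and blind random sampling of the parameter space that keeps every model below a misfit tolerance. All names, the toy response function, and the tolerance are illustrative assumptions, not the paper's actual simulator or objective.

```python
import random
import math

def simulate(params):
    # Stand-in for a reservoir simulator: maps model parameters
    # (e.g. a rate multiplier and a decline factor) to a predicted
    # production series. Purely illustrative.
    k, phi = params
    return [k * math.exp(-phi * t) for t in range(10)]

def misfit(params, observed, sigma=1.0):
    # Sum-of-squares mismatch between simulated and observed data,
    # normalised by an assumed measurement error sigma.
    simulated = simulate(params)
    return sum((s - o) ** 2 for s, o in zip(simulated, observed)) / sigma ** 2

# Synthetic "observed" history generated from a known truth.
truth = (5.0, 0.3)
observed = simulate(truth)

# Blind Monte Carlo sampling: keep every candidate whose misfit falls
# below a tolerance. Many distinct parameter sets typically qualify,
# which is exactly the non-uniqueness of history matching.
random.seed(0)
matched = []
for _ in range(5000):
    candidate = (random.uniform(1.0, 10.0), random.uniform(0.1, 0.5))
    if misfit(candidate, observed) < 0.5:
        matched.append(candidate)

print(f"{len(matched)} history-matched models found")
```

Running the forecasts of all `matched` models, rather than a single best match, is what turns the ensemble into an (approximate) distribution of predicted recovery.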
The basic formulation of Bayes' theorem is given in Eq. (1).
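Eq. (1) itself was lost in extraction. Assuming it is the standard form of Bayes' theorem for model inference, with m the vector of model parameters and O the observed data, it would read:

```latex
p(\mathbf{m} \mid \mathbf{O})
  = \frac{p(\mathbf{O} \mid \mathbf{m})\, p(\mathbf{m})}{p(\mathbf{O})}
  \propto p(\mathbf{O} \mid \mathbf{m})\, p(\mathbf{m})
```

Here p(m) is the prior encoding the geological knowledge, p(O|m) is the likelihood (typically a decreasing function of the history-match misfit), and p(m|O) is the posterior from which the history-matched models are sampled.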
Résumé (translated from French) - How Does Sampling Strategy Affect Uncertainty Estimations? - Bayesian inference techniques for assessing uncertainty in reservoir behaviour require the generation of multiple models conditioned to the field data. Identifying good models requires sampling a high-dimensional parameter space, so the reliability of the inference technique depends strongly on the efficiency of the methods used to generate models that explain the data well. To improve this reliability, the factors that affect the uncertainty estimates must be identified. This study examines the effect of different sampling strategies on the estimation of prediction uncertainty. A synthetic reservoir model was studied to compare the uncertainty estimates obtained from different sampling runs carried out with genetic algorithms (GA) and the Neighbourhood Algorithm (NA). The main differences between the GA and NA sampling results are the degree of exploration of the parameter space and the number of regions yielding models that fit the data well. We show that different sampling strategies can lead to significantly different uncertainty estimates. We also show that the predictive capacity of history-matched models can be used as an indicator of the spread of the posterior probability distribution.
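As a rough illustration of the genetic-algorithm side of the GA/NA comparison, the following is a minimal real-coded GA over a toy misfit surface. The misfit function, parameter ranges, and all operator settings (tournament selection, blend crossover, Gaussian mutation) are illustrative assumptions, not the configuration used in the study.

```python
import random

def misfit(params):
    # Toy misfit surface standing in for the simulator-based objective;
    # its minimum at (5.0, 0.3) plays the role of a "true" model.
    k, phi = params
    return (k - 5.0) ** 2 + 100.0 * (phi - 0.3) ** 2

def genetic_sample(pop_size=40, generations=30, mutation=0.1, seed=1):
    # Minimal real-coded GA: tournament selection, blend crossover,
    # Gaussian mutation. The whole final population is returned so an
    # ensemble of low-misfit models, not just one optimum, is kept.
    rng = random.Random(seed)
    pop = [(rng.uniform(1.0, 10.0), rng.uniform(0.1, 0.5))
           for _ in range(pop_size)]
    for _ in range(generations):
        new_pop = []
        for _ in range(pop_size):
            # Tournament selection of two parents (best of 3 each)
            p1 = min(rng.sample(pop, 3), key=misfit)
            p2 = min(rng.sample(pop, 3), key=misfit)
            # Blend crossover followed by Gaussian mutation, with a
            # smaller mutation scale for the second (narrower) parameter
            a = rng.random()
            child = tuple(a * x + (1 - a) * y + rng.gauss(0, mutation * s)
                          for x, y, s in zip(p1, p2, (1.0, 0.05)))
            new_pop.append(child)
        pop = new_pop
    return pop

ensemble = genetic_sample()
best = min(ensemble, key=misfit)
print(f"best misfit: {misfit(best):.4f}")
```

The study's point is visible even in this sketch: the selection pressure that drives the population toward low misfit also controls how widely the parameter space is explored, and hence how broad the resulting ensemble (and the uncertainty estimate derived from it) is.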