The classical ensemble Kalman filter (EnKF) is known to underestimate the prediction uncertainty. This can potentially lead to low forecast precision and an ensemble collapsing into a single realisation. In this paper, we present alternative EnKF updating schemes based on shrinkage methods known from multivariate linear regression. These methods reduce the effects caused by collinear ensemble members and have the same computational properties as the fastest EnKF algorithms previously suggested. In addition, the importance of model selection and validation for prediction purposes is investigated, and a model selection scheme based on cross-validation is introduced. The classical EnKF scheme is compared with the suggested procedures on two toy examples and one synthetic reservoir case study. Significant improvements are seen, both in terms of forecast precision and prediction uncertainty estimates.
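To make the shrinkage idea concrete, the following is a minimal sketch of one stochastic EnKF analysis step in which a ridge-type penalty regularises the ensemble data covariance before inversion. The parameter names (`lam`, `sigma_d`) and the simple ridge form are illustrative assumptions; the shrinkage estimators from multivariate linear regression used in the paper may differ in detail.

```python
import numpy as np

def enkf_update_shrinkage(X, D, d_obs, sigma_d, lam, rng=None):
    """One stochastic EnKF analysis step with a ridge-type shrinkage
    penalty on the ensemble data covariance (illustrative sketch)."""
    rng = np.random.default_rng() if rng is None else rng
    n, N = X.shape                        # state dimension, ensemble size
    m = D.shape[0]                        # data dimension
    Xa = X - X.mean(axis=1, keepdims=True)
    Da = D - D.mean(axis=1, keepdims=True)
    C_dd = Da @ Da.T / (N - 1)            # sample data covariance
    C_xd = Xa @ Da.T / (N - 1)            # sample state-data cross-covariance
    # perturbed observations, as in the classical stochastic EnKF
    D_obs = d_obs[:, None] + sigma_d * rng.standard_normal((m, N))
    # shrinkage: lam * I stabilises the near-singular ensemble covariance
    # caused by collinear ensemble members (lam is a hypothetical tuning knob)
    A = C_dd + sigma_d**2 * np.eye(m) + lam * np.eye(m)
    return X + C_xd @ np.linalg.solve(A, D_obs - D)
```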
The ensemble Kalman filter (EnKF) provides an approximate, sequential Monte Carlo solution to the recursive data assimilation algorithm for hidden Markov chains. The challenging conditioning step is approximated by a linear updating, and the updating weights, termed Kalman weights, are inferred from the ensemble members. The EnKF scheme is known to provide unstable predictions and to underestimate the prediction intervals, and sometimes even to diverge. The underlying cause of these shortcomings is poorly understood. We find that the ensemble members couple in the conditioning procedure and that the coupling increases multiplicatively over the recursive conditioning steps. Under reasonable Gauss-independence assumptions, exact expressions for this correlation are developed. Moreover, expressions for the precision of the predictions and the downward bias in the empirical variance introduced in one conditioning step are found. These results are confirmed by a Gauss-linear simulation study. Furthermore, we quantitatively evaluate an alternative, improved EnKF scheme based on transformations of the ensemble members under the same Gauss-independence assumptions. The scheme is compared with the frequently used ensemble inflation scheme.
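For reference, the ensemble inflation scheme used as a baseline here can be sketched as a single rescaling of the anomalies about the ensemble mean; the inflation factor `rho` is a hypothetical tuning parameter, typically chosen slightly above one to counteract the downward bias in the empirical variance.

```python
import numpy as np

def inflate_ensemble(X, rho):
    """Multiplicative covariance inflation (illustrative sketch):
    rescale the anomalies about the ensemble mean by rho >= 1 so the
    empirical variance is not biased downward after conditioning."""
    x_mean = X.mean(axis=1, keepdims=True)   # ensemble mean, one column
    return x_mean + rho * (X - x_mean)       # mean preserved, spread widened
```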
This paper describes the workflow for an uncertainty study for a field development case offshore Norway. It is a workflow where uncertainties in seismic time interpretation, depth conversion, contacts, fault scenarios, alternative conceptual models, facies models, relative permeability, schedule etc. are included in an automated way to generate multiple realizations (hundreds) of the geomodel and reservoir model. Hence, all uncertainties in the geomodel are also included explicitly in the reservoir model and are captured by the ensemble of realizations. Since structure, contacts and facies distribution differ from realization to realization, the workflow ensures that the planned well trajectories are automatically adjusted accordingly. Having such a workflow, which can generate and manage multiple realizations, makes it straightforward to obtain robust and realistic estimates of uncertainties in in-place volumes and produced volumes. It is also used for risk mitigation and decision support, e.g., to evaluate the robustness of well placement, well count, topside capacities etc. This automatic workflow made it easy to rerun the uncertainty study when new well data arrived. It also made it easy to run sensitivities on any part of the reservoir modelling workflow to gain valuable insight. Furthermore, having such a workflow made it possible to do quick and simple soft conditioning on dynamic data (such as drill-stem test data) or, alternatively, to use the dynamic data as direct input to the geomodel. Few real data from the field under study are included; those that are included are anonymized, and the scales on the axes are masked.
Seismic history matching (HM) has attracted increasing attention over the last few years. As more repeated seismic surveys are acquired, the shortcomings in modern HM tools and algorithms become more apparent. A common conception seems to be that the amount of data represented by geophysical observations and the complexity of working with 3D fields make the updating procedure hard. We investigate the nature of geophysical observations from an HM point of view by testing several data reduction techniques such as Principal Component Analysis (PCA), regression techniques such as forward stepwise selection, as well as state-of-the-art techniques based on neural networks. We argue that simulated geophysical fields from the prior models exhibit strong spatial correlations, and that their information content and effective dimensionality are much smaller than the dimensionality of the observed field. The techniques are tested on a reservoir model of an anonymous North Sea oil field, using the seismic time shift, i.e. the difference in travel time between two surveys, integrated over the reservoir. We find that PCA is particularly promising, owing to the versatility and robustness of the method. In practice this means that high-dimensional geophysical data, e.g. 2D seismic images or 3D seismic cubes, can often be described using only a handful of scalars. We show how to assess the information content in the data, compress the data, and use the compressed data in a reservoir conditioning setting. The methods we present are generic; they apply equally well to all geophysical attributes regardless of representation and can be used with any history matching algorithm, although they are mainly designed for ensemble-based techniques.
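A minimal sketch of the PCA compression step is given below, assuming the simulated attributes for the N prior ensemble members are stacked as columns of a matrix. The retained-energy threshold `energy` and the function name are illustrative assumptions; the paper's criterion for assessing the information content in the data may differ.

```python
import numpy as np

def pca_compress(D_prior, d_obs, energy=0.99):
    """Project high-dimensional simulated data and the observation onto
    the leading principal components of the prior ensemble (sketch)."""
    # D_prior: (m, N) simulated attributes, one column per ensemble member
    # d_obs:   (m,)   observed geophysical attribute
    mu = D_prior.mean(axis=1, keepdims=True)
    U, s, _ = np.linalg.svd(D_prior - mu, full_matrices=False)
    frac = np.cumsum(s**2) / np.sum(s**2)        # cumulative variance explained
    k = int(np.searchsorted(frac, energy)) + 1   # components to retain
    P = U[:, :k]                                 # basis of the effective data subspace
    Z_prior = P.T @ (D_prior - mu)               # (k, N) compressed ensemble
    z_obs = P.T @ (d_obs[:, None] - mu)          # (k, 1) compressed observation
    return Z_prior, z_obs, k
```

The returned scalars `Z_prior` and `z_obs` can then replace the full fields in any ensemble-based conditioning scheme, often with k in the single digits.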
When considering the task of creating reservoir models for fields under development, dynamic data measurements often have limited impact compared with static (geophysical and geological) data. This is not necessarily true for the Johan Sverdrup field offshore Norway, where exceptional reservoir properties make the data from eight drill-stem tests (DSTs) particularly interesting. For this reason, it is important to utilize the information found in the collected static and dynamic data in a consistent manner, to improve the understanding of the reservoir. This is especially true for the Avaldsnes High area, located in the southeastern part of the Johan Sverdrup field, where the observed thickness is below the seismic resolution, and the DST data from four wells indicate permeabilities in the range of 20 to 80 darcy, with an overlapping radius of investigation. In this paper, we apply an ensemble-based approach to generate a large set of reservoir models for the Avaldsnes High area of the Johan Sverdrup field, all of which are plausible given the currently observed static and dynamic data. We consider multiple modelling scenarios, introducing uncertainty in the sand thickness, the facies (rock type) description and the permeability modelling. In conventional pressure transient analysis (PTA), the DSTs are analyzed separately, and the non-uniqueness in the data interpretation is hard to address and quantify; this is not the case with the ensemble-based approach. Since we conduct the static and dynamic data conditioning simultaneously, we can consistently address possible ambiguities in the interpreted permeabilities, thicknesses and flow barriers seen in conventional PTA. The study reveals that by conditioning the generated models to dynamic data, we introduce clear spatial trends in both the sand thickness and the permeability. In particular, we greatly reduce the potential downside with respect to the sand thickness in the Avaldsnes High area.
The ensemble Kalman filter (EnKF) is a sequential Monte Carlo method for solving nonlinear spatiotemporal inverse problems, such as petroleum-reservoir evaluation, in high dimensions. Although the EnKF has seen successful applications in numerous areas, the classical EnKF algorithm can severely underestimate the prediction uncertainty. This can lead to biased production forecasts and an ensemble collapsing into a single realization. In this paper, we combine a previously suggested EnKF scheme based on dimension reduction in the data space with an automatic cross-validation (CV) scheme to select the subspace dimension. The properties of both the dimension reduction and the CV scheme are well known in the statistical literature. In an EnKF setting, the former can reduce the effects caused by collinear ensemble members, while the latter can guard against model overfitting by evaluating the predictive capabilities of the EnKF scheme. The model-selection criterion traditionally used for determining the subspace dimension, on the other hand, does not take the predictive power of the EnKF scheme into account, and can potentially lead to severe problems of model overfitting. A reservoir case study is used to demonstrate that the CV scheme can substantially improve the reservoir predictions with associated uncertainty estimates.
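The sketch below illustrates one way such a leave-one-out CV criterion can be realised for a truncated-SVD (subspace) regression of states on data: each ensemble member is held out in turn, the rank-p regression is rebuilt from the remaining members, and the subspace dimension minimising the held-out prediction error is selected. The rank-p pseudo-inverse regression and the squared-error criterion are simplifying assumptions, not the paper's exact scheme.

```python
import numpy as np

def cv_subspace_dimension(X, D, p_max):
    """Leave-one-out CV for the dimension-reduced (truncated-SVD)
    regression of states X (n, N) on data D (m, N) -- illustrative sketch.
    Assumes p_max <= min(m, N - 1)."""
    n, N = X.shape
    err = np.zeros(p_max)
    for j in range(N):
        keep = np.delete(np.arange(N), j)          # leave member j out
        Xk, Dk = X[:, keep], D[:, keep]
        xm = Xk.mean(axis=1, keepdims=True)
        dm = Dk.mean(axis=1, keepdims=True)
        U, s, Vt = np.linalg.svd(Dk - dm, full_matrices=False)
        z = U.T @ (D[:, j:j+1] - dm)               # held-out data in SVD coordinates
        for p in range(1, p_max + 1):
            # rank-p regression: x_hat = xm + Xa V_p S_p^{-1} U_p' (d - dm)
            coeff = (Xk - xm) @ Vt[:p].T / s[:p]
            x_hat = xm + coeff @ z[:p]
            err[p - 1] += np.sum((X[:, j:j+1] - x_hat) ** 2)
    return int(np.argmin(err)) + 1                 # dimension with lowest CV error
```

Unlike a fit-based criterion, the CV error keeps growing once extra subspace dimensions only fit noise, which is what guards against the overfitting described above.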