Summary Ensemble-based history-matching methods have received much attention in reservoir engineering. In real applications, small ensembles are often used in reservoir simulations to reduce computational costs. A small ensemble size may lead to ensemble collapse, a phenomenon in which the spread of the ensemble of history-matched reservoir models becomes artificially small. Ensemble collapse is undesirable for an ensemble-based history-matching method because it not only degrades the method's capacity for uncertainty quantification, but also causes the method to prematurely stop updating reservoir models. In practice, distance-based localization is therefore introduced to tackle ensemble collapse. Distance-based localization works well in many problems; however, one prerequisite for using it is that the observations have associated physical locations. In certain circumstances with complex observations this may not be true, and it then becomes challenging to apply distance-based localization. In this work, we propose a correlation-based adaptive localization scheme that does not rely on the physical locations of the observations. Instead, we use the spatial distributions of the correlations between model variables and the corresponding simulated observations. In the course of history matching, we update model variables using only the observations that have relatively high correlations with them, while excluding those with relatively low correlations. This is equivalent to introducing a data-selection procedure into the history-matching algorithm. As a result, the threshold values for data selection play an essential role in the proposed adaptive localization scheme, and we develop both ideal and practical approaches to choosing these threshold values.
We demonstrate the efficacy of the proposed localization scheme using seismic history-matching problems—one 2D and one 3D—in which ensemble collapse is severe in the presence of large amounts of observational data, but distance-based localization may not be applicable because the seismic data in use lack physical locations. In contrast, correlation-based localization works well to prevent ensemble collapse and also yields good history-matching results. We also note some practical conveniences of the proposed localization scheme, including its applicability to nonlocal observations, its relative simplicity of implementation, the transferability of the same code across different (either 2D or 3D) case studies, and its adaptivity to different types of observations and petrophysical parameters.
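As an illustrative sketch of the data-selection step described in this abstract (not the authors' implementation; the function name, array layout, and example threshold are ours), the correlation-based mask could be built from the ensembles alone:

```python
import numpy as np

def correlation_localization_mask(model_ens, sim_obs_ens, threshold=0.3):
    """Data-selection mask for correlation-based adaptive localization.

    model_ens:   (n_model, n_ens) ensemble of model variables
    sim_obs_ens: (n_obs,   n_ens) ensemble of simulated observations
    Returns a boolean (n_model, n_obs) mask that is True where the
    sample correlation magnitude exceeds the threshold, i.e. where the
    observation is retained for updating that model variable.
    """
    m = model_ens - model_ens.mean(axis=1, keepdims=True)
    d = sim_obs_ens - sim_obs_ens.mean(axis=1, keepdims=True)
    # Sample cross-correlation between every (variable, observation) pair.
    num = m @ d.T
    denom = np.outer(np.linalg.norm(m, axis=1), np.linalg.norm(d, axis=1))
    corr = np.divide(num, denom, out=np.zeros_like(num), where=denom > 0)
    return np.abs(corr) >= threshold
```

In an ensemble update, each model variable would then be assimilated against only the observations its mask row retains; the abstract's "ideal and practical approaches" concern how the threshold itself is chosen.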
In this work we propose an ensemble-based 4D seismic history-matching framework for reservoir characterization. Compared with similar existing frameworks in the reservoir engineering community, the proposed one contains some relatively new ingredients: the choice of seismic data type, wavelet multiresolution analysis of the chosen seismic data and the associated estimation of data noise, and the use of recently developed iterative ensemble history-matching algorithms. Typical seismic data used for history matching, such as acoustic impedance, are inverted quantities, and extra uncertainties may arise during the inversion process. In the proposed framework we avoid such intermediate inversion processes. In addition, we adopt a wavelet-based sparse representation to reduce the data size. Concretely, we use intercept and gradient attributes derived from amplitude-versus-angle (AVA) data, apply multilevel discrete wavelet transforms (DWT) to the attribute data, and estimate the noise level of the resulting wavelet coefficients. We then select the wavelet coefficients above a certain threshold value and history-match these leading wavelet coefficients using an iterative ensemble smoother. As a proof-of-concept study, we apply the proposed framework to a 2D synthetic case derived from a 3D Norne field model. The reservoir model variables to be estimated are permeability (PERMX) and porosity (PORO) at each active gridblock. A rock-physics model is used to calculate seismic parameters (velocity and density) from reservoir properties (porosity, fluid saturation, and pressure); reflection coefficients are then generated using a linearized AVA equation that involves velocity and density. AVA data are obtained by convolving the reflection coefficients with a Ricker wavelet. The multiresolution analysis applied to the AVA attributes helps to obtain a good estimate of the noise level and substantially reduces the data size.
We compare history-matching performance in three scenarios: (S1) with production data only, (S2) with seismic data only, and (S3) with both production and seismic data. In scenarios S2 and S3, we also inspect two sets of experiments, one using the original seismic data (full-data experiments) and the other adopting sparse representations (sparse-data experiments). Our numerical results suggest that, in this particular case study, using production data largely improves the estimation of permeability but has little effect on the estimation of porosity, whereas using seismic data only improves the estimation of porosity but not that of permeability. In contrast, using both production and 4D seismic data improves the estimation accuracy of both porosity and permeability. Moreover, in both scenarios S2 and S3, provided that a suitable stopping criterion is used in the iterative ensemble smoother, adopting sparse representations results in better history-matching performance than using the original dataset.
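The forward-modelling step sketched in this abstract ends with a convolution of reflection coefficients against a Ricker wavelet. A minimal sketch of that last step, assuming a standard Ricker definition with peak frequency f0 (the function names and default parameters are illustrative, not taken from the paper):

```python
import numpy as np

def ricker(f0=30.0, dt=0.002, length=100):
    """Ricker (Mexican-hat) wavelet with peak frequency f0 in Hz,
    sampled at interval dt seconds; returns length+1 samples centred on t=0."""
    t = np.arange(-length // 2, length // 2 + 1) * dt
    a = (np.pi * f0 * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

def synthetic_ava_trace(reflectivity, f0=30.0, dt=0.002, length=100):
    """Convolve a reflection-coefficient series with a Ricker wavelet,
    keeping the output aligned with the input samples."""
    return np.convolve(reflectivity, ricker(f0, dt, length), mode="same")
```

A single unit reflector then reproduces the wavelet centred at the reflector's sample, which is the usual sanity check for this kind of convolutional modelling.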
Data assimilation is an important discipline in the geosciences that aims to combine the information content of prior geophysical models and observational data (observations) to obtain improved model estimates. Ensemble-based methods are among the state-of-the-art algorithms in the data assimilation community. When applying ensemble-based methods to assimilate big geophysical data, substantial computational resources are needed to compute and/or store certain quantities (e.g., the Kalman-gain-type matrix), given both the large model and data sizes. In addition, uncertainty quantification of observational data, e.g., estimating the observation error covariance matrix, also becomes computationally challenging, if not infeasible. To tackle these challenges in the presence of big data, in a previous study the authors proposed a wavelet-based sparse representation procedure for 2D seismic data assimilation problems (also known as history-matching problems in petroleum engineering). In the current study, we extend the sparse representation procedure to 3D problems, an important step towards real field case studies. To demonstrate its efficiency, we apply an ensemble-based seismic history-matching framework with the extended sparse representation procedure to a 3D benchmark case, the Brugge field. In this benchmark case study, the total number of seismic data points is on the order of . We show that the wavelet-based sparse representation procedure is extremely efficient in reducing the size of the seismic data while preserving their salient features. Moreover, even with a substantial data-size reduction through sparse representation, the ensemble-based seismic history-matching framework can still achieve good estimation accuracy.
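The sparse representation idea in this abstract amounts to transforming the data to a wavelet basis and keeping only the large coefficients. A minimal one-level sketch using the orthonormal Haar transform (the papers use multilevel DWTs of unspecified family; this simplification and the function names are ours):

```python
import numpy as np

def haar_dwt_1d(x):
    """One level of the orthonormal Haar DWT; len(x) must be even.
    Returns (approximation, detail) coefficient arrays of half length."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return approx, detail

def sparsify(coeffs, threshold):
    """Hard thresholding: zero out coefficients with magnitude below threshold,
    yielding the sparse representation that is actually history-matched."""
    coeffs = np.asarray(coeffs, dtype=float)
    return np.where(np.abs(coeffs) >= threshold, coeffs, 0.0)
```

For 3D data the same transform is applied along each axis in turn; smooth regions produce near-zero detail coefficients, which is why thresholding shrinks the data size drastically while keeping salient features.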
Summary In this paper, we use a combination of acoustic impedance and production data to history-match the full Norne field. The purpose of the paper is to illustrate a robust and flexible workflow for assisted history matching of large datasets. We apply an iterative ensemble-based smoother, and the traditional approach to assisted history matching is extended to include updates of additional parameters representing rock clay content, which has a significant effect on seismic data. Further, for seismic data it is challenging to properly specify the measurement noise, because both the noise level and the spatial correlation of the measurement noise are unknown. For this purpose, we apply a method based on image denoising to estimate the spatially correlated (colored) noise level in the data. For the best possible evaluation of the workflow performance, all data are synthetically generated in this study. We assimilate production data and seismic data sequentially. First, the production data are assimilated using traditional distance-based localization, and the resulting ensemble of reservoir models is then used when assimilating seismic data. This procedure is suitable for real field applications, because production data are usually available before seismic data. If both production data and seismic data were assimilated simultaneously, the high number of seismic data might dominate the overall history-matching performance. The noise estimation for seismic data involves transforming the observations to a discrete wavelet domain. However, the resulting data do not have a clear spatial position, and the traditional distance-based localization schemes, used to avoid spurious correlations and underestimated uncertainty (because of limited ensemble size), cannot be applied. Instead, we use a localization scheme based on correlations between observations and parameters that does not rely on the physical positions of model variables or data.
This method automatically adapts to each observation and iteration. The results show that we reduce data mismatch for both production and seismic data, and that the use of seismic data reduces estimation errors for porosity, permeability, and net-to-gross ratio (NTG). Such improvements can provide useful information for reservoir management and planning for additional drainage strategies.
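The noise-estimation step mentioned in this abstract works in the wavelet domain. A common denoising-style estimator (in the spirit of Donoho–Johnstone; the abstract does not specify the exact estimator, so this is an assumed, minimal variant) reads the noise level off the finest-scale detail coefficients via the median absolute deviation:

```python
import numpy as np

def estimate_noise_sigma(detail_coeffs):
    """Robust noise-level estimate from finest-scale wavelet detail
    coefficients: sigma ~ median(|d|) / 0.6745. For an orthonormal DWT,
    additive white Gaussian noise keeps its standard deviation in the
    detail coefficients, while the signal contributes little there."""
    d = np.abs(np.asarray(detail_coeffs, dtype=float))
    return np.median(d) / 0.6745
```

The constant 0.6745 is the median of |Z| for a standard normal Z, so the estimator is unbiased for Gaussian noise and insensitive to a few large signal-bearing coefficients.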
We find that conventional time-lapse seismic pressure-saturation discrimination methods become unstable for high [Formula: see text] ratios. Using first-order approximations of the amplitude-variation-with-offset (AVO) gradient and intercept changes and of the rock-physics models increases the inaccuracy of the estimated pressure-saturation changes. We propose a new method, based on a stepwise linear approximation of the intercept and gradient reflectivity changes, to estimate pressure and saturation changes. The applicability of the new method is tested on synthetic data over a range of 0%–50% gas saturation and 0–3.5 MPa pore-pressure change. The new method is more consistent and provides better estimates than conventional methods, and in the presence of random noise (up to 15%) its estimates remain noticeably better. We use this new method to investigate the feasibility of pressure-saturation discrimination for a shallow unconsolidated sand reservoir in which an underground blowout occurred in 1989. We analyze a sand layer at 490-m depth that was charged with gas as a result of the blowout. Because of the shallow depth, the [Formula: see text] ratio is expected to be higher than 2 before the blowout. Near- and far-offset time-lapse seismic datasets from 1988 and 1990 are used as input to estimate changes in AVO intercept and gradient, and then changes in pressure and saturation. We find that the new method estimates more realistic pressure-saturation changes than the conventional one.
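The AVO intercept and gradient that this abstract (and the AVA framework above) rely on are typically obtained by fitting the two-term Shuey approximation R(θ) ≈ A + B sin²θ to amplitudes at several angles. A minimal least-squares sketch of that fit (the function name is ours; the paper's stepwise method builds on changes in these quantities, not on this fit itself):

```python
import numpy as np

def avo_intercept_gradient(angles_deg, amplitudes):
    """Least-squares fit of the two-term Shuey approximation
    R(theta) ~ A + B * sin(theta)**2, returning intercept A and gradient B."""
    s2 = np.sin(np.deg2rad(np.asarray(angles_deg, dtype=float))) ** 2
    G = np.column_stack([np.ones_like(s2), s2])  # design matrix [1, sin^2(theta)]
    (A, B), *_ = np.linalg.lstsq(G, np.asarray(amplitudes, dtype=float), rcond=None)
    return A, B
```

With only near- and far-offset stacks, as in the field case here, the fit reduces to solving the same two-parameter system from two effective angles.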