SUMMARY
We consider robust ensemble-based multi-objective optimization using a hierarchical switching algorithm for combined long-term and short-term water-flooding optimization. We apply a modified formulation of the ensemble gradient which results in improved performance compared to earlier formulations. We also apply multi-dimensional scaling to visualize projections of the high-dimensional search space, to aid in understanding the complex nature of the objective-function surface and the performance of the optimization algorithm. This provides insight into the quality of the gradient, and confirms the presence of ridges in the objective-function surface which can be exploited for multi-objective optimization. We used an 18,553-gridblock model of a channelized reservoir with 4 producers and 8 injectors. The controls were the flow rates in the injectors, and the long-term and short-term objective functions were undiscounted net present value (NPV) and highly discounted (25%) NPV, respectively. We achieved an increase of 15.2% in the secondary objective for a decrease of 0.5% in the primary objective, averaged over 100 geological realizations. The total number of reservoir simulations was around 20,000, which indicates the potential to use the ensemble optimization method for robust multi-objective optimization of medium-sized reservoir models.
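The multi-dimensional scaling step can be sketched with classical MDS in plain NumPy; the control dimension, sample count, and random control vectors below are illustrative, not taken from the study:

```python
import numpy as np

def classical_mds(points, dim=2):
    """Project points (n_samples x n_dims) to `dim` coordinates by
    double-centering the squared-distance matrix and keeping the top
    eigenvectors (classical multi-dimensional scaling)."""
    d2 = np.sum((points[:, None, :] - points[None, :, :]) ** 2, axis=-1)
    n = d2.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    b = -0.5 * j @ d2 @ j                        # inner-product matrix
    vals, vecs = np.linalg.eigh(b)               # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:dim]           # keep the largest `dim`
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))

# Hypothetical example: 50 control vectors in a 96-dimensional search space
rng = np.random.default_rng(0)
controls = rng.normal(size=(50, 96))
xy = classical_mds(controls)   # 50 x 2 projection, ready for scatter-plotting
```

Projecting the control vectors visited by the optimizer in this way is one standard route to the kind of search-space visualization described above.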
Three methods to estimate ocean geostrophic surface currents from satellite altimetry measurements are evaluated for several single- and multiple-satellite configurations, with specific emphasis on the resulting uncertainties. Altimetric sea surface height measurements are simulated by sampling, along satellite ground tracks, the surface pressure output from the 1/10° North Atlantic run of the Los Alamos Parallel Ocean Program model and by subsequently adding realistic instrument and orbit errors. The effects of both sampling and data errors on the velocity estimates are discussed. The satellite orbit configurations considered represent current missions or candidates for future coordinated tandem missions. Data error budgets are based on those of existing missions and on estimates for new altimetric technology currently under development. In midlatitude regions characterized by strong variability, such as the Gulf Stream region, velocities estimated at crossovers of interleaved tracks, and along a virtual ground track between two parallel tracks with a 0.75° zonal offset, are found to be comparable in accuracy and more accurate than velocities estimated from optimally interpolated sea surface height maps. Error variances as low as 15-25% of the local signal variance can be obtained from all three methods near the Gulf Stream core. Larger relative errors are found almost everywhere else, with the exact details of the error in the two velocity components depending on data error, orbit configuration, latitude, estimation method, and smoothing. Several scientific applications of the configurations and methods are discussed, including the estimation of Reynolds stresses, momentum fluxes, velocity spectra, and covariance functions. Accuracy and applicability suggest that the newly proposed parallel-track configuration is a viable option for future tandem missions.
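All three estimation methods ultimately rest on the geostrophic balance, which for a gridded sea surface height field can be sketched as follows; the grid spacing, latitude, and SSH field below are illustrative:

```python
import numpy as np

G = 9.81            # gravitational acceleration, m/s^2
OMEGA = 7.2921e-5   # Earth's rotation rate, rad/s

def geostrophic_velocity(ssh, lat_deg, dx, dy):
    """Geostrophic surface velocity from gridded sea surface height (m),
    with rows along y and columns along x, spacings dy and dx in meters:
    u = -(g/f) * d(eta)/dy,  v = (g/f) * d(eta)/dx."""
    f = 2.0 * OMEGA * np.sin(np.radians(lat_deg))   # Coriolis parameter
    deta_dy, deta_dx = np.gradient(ssh, dy, dx)     # centered differences
    return -(G / f) * deta_dy, (G / f) * deta_dx

# Illustrative: a tilted SSH plane on a 10 km grid at 35°N
y, x = np.meshgrid(np.arange(10) * 1e4, np.arange(12) * 1e4, indexing="ij")
ssh = 1e-6 * y + 2e-6 * x
u, v = geostrophic_velocity(ssh, 35.0, 1e4, 1e4)
```

The evaluated methods differ mainly in how the height differences entering these derivatives are formed (crossovers, parallel tracks, or interpolated maps), not in the balance itself.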
While the number of applications of ensemble optimization (EnOpt) for production optimization has increased, the theoretical understanding of the quality of the gradient has received little attention. An important factor influencing the quality of the gradient estimate is the number of samples. In this study we use principles from statistical hypothesis testing to quantify the number of samples needed to estimate an ensemble gradient that is comparable in quality to an accurate adjoint gradient. We develop a methodology to estimate the ensemble size necessary to obtain an approximate gradient that is within a predefined angle of the adjoint gradient, with a predefined statistical confidence. The method is first applied to the Rosenbrock function (a standard optimization test problem) for a single realization, and subsequently to a case with uncertainty represented by multiple realizations (robust optimization). The maximum allowed error in both experiments is a 10° angle between the directions of the EnOpt gradient and the exact gradient. For the single-realization case we need, depending on the perturbation size, 900, 5, and 3 samples to estimate a "good" gradient with 95% confidence at 50 points in the optimization space for 50 different random sequences. For the robust case, the conventional EnOpt approach is to couple one model realization with one control sample, which leads to a computationally efficient technique for estimating a mean gradient. However, our results show that, to be 95% confident, the original one-to-one model-realization-to-control-sample formulation is not sufficient: achieving the required confidence with this formulation requires a ratio of 1:1100, i.e. each model realization must be paired with 1100 control samples. Using a modified formulation, however, a ratio of 1:10 keeps the gradient within the maximum allowed error for 95% of the points in space, and a 1:1 ratio is sufficient for 85% of the points.
We also tested our methodology on a reservoir model, for both deterministic and robust cases, and observed similar trends in the results. Our results provide insight into the number of samples required for EnOpt, in particular for robust optimization, to achieve a gradient comparable to an adjoint gradient.
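The angle criterion can be sketched on the Rosenbrock function with a simplified cross-covariance form of the EnOpt gradient; the evaluation point, perturbation size, and sample count below are illustrative, and this is not the full preconditioned EnOpt formulation:

```python
import numpy as np

def rosenbrock(x):
    return (1.0 - x[0]) ** 2 + 100.0 * (x[1] - x[0] ** 2) ** 2

def rosenbrock_grad(x):
    """Exact (adjoint-like) gradient, for comparison."""
    return np.array([-2.0 * (1.0 - x[0]) - 400.0 * x[0] * (x[1] - x[0] ** 2),
                     200.0 * (x[1] - x[0] ** 2)])

def enopt_gradient(f, x, n_samples, sigma, rng):
    """Simplified EnOpt gradient: cross-covariance of Gaussian control
    perturbations with objective changes, scaled by the perturbation variance."""
    dx = rng.normal(scale=sigma, size=(n_samples, x.size))
    dj = np.array([f(x + d) for d in dx]) - f(x)
    return dx.T @ dj / (n_samples * sigma ** 2)

def angle_deg(a, b):
    c = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

rng = np.random.default_rng(1)
x0 = np.array([-0.5, 1.5])
theta = angle_deg(enopt_gradient(rosenbrock, x0, 900, 1e-3, rng),
                  rosenbrock_grad(x0))   # angle between EnOpt and exact gradient
```

Repeating this over many points and random sequences, and counting how often `theta` stays below 10°, gives the kind of confidence estimate described above.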
A reliable estimate of reservoir pressure and fluid-saturation changes from time-lapse seismic data is difficult to obtain. Existing methods generally suffer from leakage between the estimated parameters. We propose a new method using different combinations of time-lapse seismic attributes based on four equations: two expressing changes in prestack AVO attributes (zero-offset and gradient reflectivities), and two expressing poststack time-shifts of compressional and shear waves, as functions of production-induced changes in fluid properties. The effect of using different approximations of these equations was tested on a realistic synthetic reservoir, for which seismic data were simulated over the 30-year lifetime of a water-flooded oil reservoir. The results highlight the importance of porosity in the inversion: the porosity imprint on the final estimates is clearly attenuated when the porosity field, or the vertically averaged porosity field, is known a priori. Using a first-order approximation of the gradient-reflectivity equation leads to severely biased estimates of changes in saturation and to leakage between the two parameters. Both the bias and the leakage can be reduced, if not eliminated, by including higher-order terms in the description of the gradient, or by replacing the gradient equation with P- and/or S-wave time-shift data. The final estimates are relatively robust to random noise, remaining fairly accurate in the presence of white noise with a standard deviation of 15%. Systematic noise degrades the inversion accuracy more severely.
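The role of the number of attribute equations in suppressing leakage can be illustrated with a schematic linearized inversion; the sensitivity values below are made up for illustration and are not calibrated rock-physics coefficients:

```python
import numpy as np

# Hypothetical linearized sensitivities of each attribute to (Δsaturation, Δpressure)
g_full = np.array([[0.8,  0.3],    # zero-offset reflectivity
                   [0.2,  0.7],    # AVO gradient reflectivity
                   [0.5, -0.4]])   # P-wave time-shift

m_true = np.array([0.10, -2.0])    # "true" (Δsaturation, Δpressure) changes
d = g_full @ m_true                # noise-free time-lapse attribute data

# With three independent attributes the two unknowns are overdetermined and a
# least-squares solve separates them; a single attribute row alone cannot.
m_est, *_ = np.linalg.lstsq(g_full, d, rcond=None)
```

Dropping rows from the sensitivity matrix (or biasing them, as a first-order gradient approximation does) is what produces the leakage between the two parameters described above.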
SUMMARY
The traditional analysis scheme in the Ensemble Kalman Filter (EnKF) uses a stochastic perturbation, or randomization, of the measurements, which ensures a correct variance in the updated ensemble. An alternative, so-called deterministic, analysis algorithm is based on a square-root formulation in which the perturbation of measurements is avoided. Experiments with simple models have indicated that ensemble collapse is likely to occur when deterministic filters are applied to nonlinear problems. In this paper the properties of stochastic and deterministic ensemble analysis algorithms are evaluated in an identical-twin experiment using an ocean general-circulation model. In particular, the implications of using deterministic Ensemble Square-Root Filters (EnSRF) for the ensemble distribution are investigated. An explanation is presented for the observed collapse, and a simple solution based on randomization of the analysis ensemble anomalies is examined. A one-year assimilation run with this improved EnSRF is found to produce Gaussian distributions, similar to the EnKF.
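The randomization of the analysis-ensemble anomalies can be sketched as a random orthogonal rotation that preserves both the ensemble mean and the sample covariance; this is a generic construction, not necessarily the exact one used in the paper:

```python
import numpy as np

def randomize_anomalies(anomalies, rng):
    """Apply a random mean-preserving orthogonal rotation to ensemble
    anomalies (state_dim x n_ens). Rotations that fix the direction of the
    vector of ones leave the ensemble mean unchanged, while orthogonality
    leaves the sample covariance unchanged."""
    n = anomalies.shape[1]
    # Orthonormal basis whose first vector is proportional to the ones vector
    v = np.linalg.qr(np.column_stack(
        [np.ones(n) / np.sqrt(n), rng.normal(size=(n, n - 1))]))[0]
    # Random orthogonal block acting on the remaining n-1 directions
    q = np.linalg.qr(rng.normal(size=(n - 1, n - 1)))[0]
    theta = v @ np.block([[np.ones((1, 1)), np.zeros((1, n - 1))],
                          [np.zeros((n - 1, 1)), q]]) @ v.T
    return anomalies @ theta
```

Applying such a rotation after each deterministic square-root update redistributes the anomalies without touching the analysed mean or covariance, which is the mechanism behind the Gaussian distributions reported above.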
In an earlier study, two hierarchical multi-objective methods were suggested to include short-term targets in life-cycle production optimization. However, this earlier study has two limitations: 1) the adjoint formulation is used to obtain gradient information, requiring simulator source-code access and an extensive implementation effort, and 2) one of the two proposed methods relies on a Hessian matrix that is obtained by a computationally expensive method. To overcome the first limitation, we used ensemble-based optimization (EnOpt). EnOpt does not require source-code access and is relatively easy to implement. To address the second limitation, we used the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm to obtain an approximation of the Hessian matrix. We performed experiments in which a water flood was optimized in a geologically realistic multi-layer sector model. The controls were inflow-control-valve settings at predefined time intervals. Undiscounted net present value (NPV) and highly discounted NPV were used as the long-term and short-term objective functions, respectively. We obtained an increase of approximately 14% in the secondary objective for a decrease of only 0.2-0.5% in the primary objective. The study demonstrates that ensemble-based hierarchical multi-objective optimization can achieve results of practical value in a computationally efficient manner.
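The BFGS approximation of the Hessian is built from the standard rank-two inverse-Hessian update, shown here in isolation as a minimal sketch:

```python
import numpy as np

def bfgs_update(h_inv, s, y):
    """One BFGS update of the inverse-Hessian approximation h_inv, given a
    control step s = x_new - x_old and gradient change y = g_new - g_old.
    The updated matrix satisfies the secant condition h_inv_new @ y == s."""
    rho = 1.0 / (y @ s)                      # requires the curvature y·s > 0
    i = np.eye(len(s))
    return ((i - rho * np.outer(s, y)) @ h_inv @ (i - rho * np.outer(y, s))
            + rho * np.outer(s, s))
```

An approximation accumulated this way from successive ensemble-gradient evaluations can stand in for the explicitly computed Hessian of the earlier study.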
Ensemble-based optimization has recently received considerable attention as a potentially powerful technique for life-cycle production optimization, a crucial element of reservoir management. Recent publications have increased both the number of applications and the theoretical understanding of the algorithm. However, there is still ample room for further development, since most of the theory is based on strong assumptions. Here, the mathematics (or statistics) of ensemble optimization is studied, and it is shown that the algorithm is a special case of an already well-defined natural evolution strategy known as Gaussian mutation. A natural description of uncertainty in reservoir management arises from the use of an ensemble of history-matched geological realizations. A logical step is therefore to incorporate this uncertainty description in robust life-cycle production optimization through the expected objective-function value, approximated by the mean over all geological realizations. It is shown that the frequently advocated strategy of applying a different control sample to each reservoir realization delivers an unbiased estimate of the gradient of the expected objective function. However, this procedure is more prone to variance than the deterministic strategy of applying the entire ensemble of perturbed control samples to each reservoir-model realization. To reduce the variance of the gradient estimate, an importance-sampling algorithm is proposed and tested on a toy problem of increasing dimensionality.
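The two pairing strategies can be contrasted on a toy robust-optimization problem; the quadratic objective, sample counts, and perturbation size below are illustrative:

```python
import numpy as np

def grad_one_to_one(f, x, realizations, sigma, rng):
    """1:1 pairing: each geological realization gets its own control perturbation."""
    dx = rng.normal(scale=sigma, size=(len(realizations), x.size))
    dj = np.array([f(x + d, m) - f(x, m) for d, m in zip(dx, realizations)])
    return dx.T @ dj / (len(realizations) * sigma ** 2)

def grad_full_pairing(f, x, realizations, sigma, rng, n_samples):
    """Deterministic strategy: every perturbed control is run on every
    realization, estimating the gradient of the ensemble-mean objective."""
    dx = rng.normal(scale=sigma, size=(n_samples, x.size))
    j_mean = lambda u: np.mean([f(u, m) for m in realizations])
    dj = np.array([j_mean(x + d) for d in dx]) - j_mean(x)
    return dx.T @ dj / (n_samples * sigma ** 2)

# Toy objective: each realization m shifts the optimum of a quadratic
rng = np.random.default_rng(3)
ms = rng.normal(size=(10, 3))                 # 10 realizations, 3 controls
f = lambda u, m: -np.sum((u - m) ** 2)
x0 = np.array([2.0, -1.0, 0.5])
g_full = grad_full_pairing(f, x0, ms, 1e-3, rng, 500)
g_one = grad_one_to_one(f, x0, ms, 1e-3, rng)
```

For this toy problem the exact gradient of the expected objective is `-2 * (x0 - ms.mean(axis=0))`; both estimators are unbiased, but the 1:1 estimator uses only as many perturbations as there are realizations, so its variance is much larger.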