Time-lapse seismic technology has been successfully used in the petroleum industry. It provides important information about reservoir properties between and beyond the wells; 4D (time-lapse) seismic data is used as input for several processes, such as well planning/completion, geological model constraining, and reservoir simulation history matching. However, there are technical issues to be addressed before starting a 4D seismic project. Several geophysical studies use the chance-of-success concept to identify favorable cases, evaluating the seismic survey and the magnitude of seismic changes. From an engineering point of view, however, it is also important to evaluate the chance of business success, which relies on using the new information to identify infill well locations, increase the predictive capability of reservoir simulations, optimize reservoir performance, and develop well intervention programs. A 4D seismic project is considered an economic success if its impact on field operations generates more monetary benefit than the acquisition cost. This complex estimation should be based on field uncertainties and decision analysis. Given the importance and difficulty of predicting economic success, this paper presents a methodology to estimate the chance of success of a 4D seismic project from the engineering perspective. The methodology was applied to a synthetic reservoir model, during the development phase, to obtain a first estimate of the chance of success of a 4D seismic project for a specific production period. The process comprises several techniques: risk analysis, representative model selection, production strategy optimization, and the value-of-information concept.
The presented methodology provides information on the chance of success of a 4D seismic project at a specific production period, assisting the decision maker in evaluating the need for further analysis or deciding whether or not to acquire 4D seismic data. This decision is taken considering the acquisition cost, the increase in NPV due to the new data, and other influential factors. In the case studied, the main benefit derived from the acquisition of 4D seismic data was the identification of remaining oil areas. Time-lapse seismic data has been successfully used in reservoir monitoring, and several published cases report the success of its use in improving production efficiency. However, it is important to predict, before acquisition, whether the new information will be profitable. This is a complex but essential part of time-lapse seismic reservoir management.
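The economic-success criterion described above can be sketched in a few lines: the project pays off when the probability-weighted NPV gain across the uncertain scenarios (the expected value of information, EVOI) exceeds the acquisition cost. This is a minimal illustrative sketch; the scenario probabilities and NPV figures below are invented, and the paper's actual computation involves full strategy re-optimization per scenario.

```python
def evoi(npv_with_data, npv_without_data, probabilities):
    """Expected value of information: probability-weighted NPV gain
    across uncertain reservoir scenarios (all figures hypothetical)."""
    return sum(p * (w - wo)
               for p, w, wo in zip(probabilities, npv_with_data, npv_without_data))

def is_economic_success(evoi_value, acquisition_cost):
    # The project is an economic success when the expected benefit of the
    # new data exceeds what it costs to acquire it.
    return evoi_value > acquisition_cost

# Three representative reservoir scenarios (hypothetical NPVs, million USD)
p = [0.3, 0.5, 0.2]
npv_with = [120.0, 95.0, 60.0]     # NPV after re-optimizing with 4D data
npv_without = [110.0, 90.0, 58.0]  # NPV of the base production strategy

value = evoi(npv_with, npv_without, p)            # 0.3*10 + 0.5*5 + 0.2*2 = 5.9
print(is_economic_success(value, acquisition_cost=4.0))  # True
```

In practice the per-scenario NPVs on the "with data" branch come from re-optimizing the production strategy against each representative model, which is why model selection (discussed below) matters.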
In petroleum engineering, simulation models are used for reservoir performance prediction and in the decision-making process. These models are complex systems, typically characterized by a vast number of input parameters. Usually the physical state of the reservoir is highly uncertain, and so, therefore, are the appropriate choices of input parameters. Uncertainty analysis often proceeds by first calibrating the simulator against observed production history and then using the calibrated model to forecast future well production. Most models go through a series of iterations before being judged to give an adequate representation of the physical system. This can be a difficult task, since the input space to be searched may be high dimensional, the collection of outputs to be matched may be very large, and each single evaluation may take a long time. Because this uncertainty analysis is complex and time consuming, in this paper a stochastic representation of the computer model, called an emulator, was constructed to quantify the reduction in the input-parameter space due to production data over different production periods. The emulator methodology represents a powerful and general tool for the analysis of complex physical models such as reservoir simulators; such emulation techniques have been successfully applied across a large number of scientific disciplines. The methodology was applied to evaluate the capacity of production data to identify uncertain reservoir physical features over the production period for a synthetic reservoir simulation model, built to represent a region containing an injector and its related producers. In the case studied, thousands of realizations were required to identify certain physical reservoir features, which justifies the use of emulation and shows the importance of this technique for identifying regions of feasible input parameters.
Moreover, the impact of different production periods on the input-space reduction was determined. The emulator methodology assists in tasks that require computationally expensive objective-function evaluations, such as identifying regions of feasible input parameters, making predictions of the future behavior of the physical system, and investigating reservoir behavior.
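The emulator idea above can be illustrated with a minimal Gaussian-process sketch: a handful of expensive simulator runs train a cheap statistical surrogate that predicts the simulator output, with uncertainty, at untried input settings. The squared-exponential kernel and all numbers here are illustrative assumptions, not the paper's actual emulator specification, and the "simulator" is a toy one-dimensional function.

```python
import numpy as np

def sq_exp_kernel(X1, X2, length=0.2, variance=1.0):
    """Squared-exponential covariance between two sets of input points."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / length ** 2)

def emulate(X_train, y_train, X_new, nugget=1e-6):
    """GP emulator: predictive mean and variance at untried inputs X_new,
    conditioned on a small set of expensive simulator runs."""
    K = sq_exp_kernel(X_train, X_train) + nugget * np.eye(len(X_train))
    Ks = sq_exp_kernel(X_new, X_train)
    Kss = sq_exp_kernel(X_new, X_new)
    alpha = np.linalg.solve(K, y_train)
    mean = Ks @ alpha
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.diag(cov)

# Toy stand-in for the reservoir simulator: 6 "expensive" runs
X = np.linspace(0, 1, 6)[:, None]
y = np.sin(2 * np.pi * X[:, 0])

# Instant prediction at a new input, with an uncertainty estimate
mean, var = emulate(X, y, np.array([[0.35]]))
```

The key property is speed: once built, the emulator is evaluated in microseconds, so the thousands of realizations mentioned above become tractable.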
4D seismic data is now routinely used in reservoir monitoring and has become an important tool for reservoir management. However, from the economic perspective, the information is only valuable if it influences decisions: it is necessary to decide whether acquiring new information will be useful for field management. Thus, a methodology was developed to estimate the chance of success of a 4D seismic project. The methodology is an iterative process that quantifies the increase in the project's value through optimization of the production strategy for several reservoir models. As the number of reservoir scenarios representing the problem can be high, the optimization process can be time consuming; the selection of representative models is therefore an important step of the methodology. The present study discusses the impact of the number of representative models on the expected value of information (EVOI) and chance-of-success results. The methodology was applied to a synthetic model to validate the results and show its benefits. The selection of representative models was based on production and economic results from the reservoir scenarios. Five evaluations were performed, each considering a different number of representative models. The EVOI and chance of success vary with the number of representative models; however, both stabilize as the number of representative models increases. The methodology estimates the variation of the expected benefits due to the acquisition of 4D seismic data, and the results obtained support the decision-making process, making it an important tool for reservoir management. The decision-making process must consider the current uncertain scenario, the engineering and geophysical studies, and also the economic aspect. Measuring and, especially, predicting the economic impact of new information is complex.
Thus, the chance of success methodology is a tool to assist the decision maker in the evaluation of new 4D seismic projects.
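One simple way to operationalize the chance of success over a set of representative models, sketched below under the assumption that it is the fraction of models in which the NPV gain from the 4D data exceeds the acquisition cost. This is an illustrative simplification with invented numbers; the paper's definition may weight scenarios by probability or use a different criterion.

```python
def chance_of_success(npv_gains, acquisition_cost):
    """Fraction of representative models in which acquiring 4D seismic
    data pays off (hypothetical simplification of the paper's metric)."""
    wins = sum(1 for gain in npv_gains if gain > acquisition_cost)
    return wins / len(npv_gains)

# Hypothetical NPV gain per representative model (million USD)
gains = [6.2, 3.1, 8.0, 4.5, 1.9]
print(chance_of_success(gains, acquisition_cost=4.0))  # 0.6 (3 of 5 models)
```

Re-running this with 3, 5, 7, ... representative models is the kind of sensitivity study described above: the estimate fluctuates for small sets and stabilizes as more models are included.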
Summary: When performing classic uncertainty reduction conditioned on dynamic data, a large number of reservoir simulations need to be evaluated at high computational cost. As an alternative, we construct Bayesian emulators that mimic the dominant behavior of the reservoir simulator and that are several orders of magnitude faster to evaluate. We combine these emulators within an iterative procedure that involves substantial but appropriate dimensional reduction of the output space (which represents the reservoir physical behavior, such as production data), enabling a more effective and efficient uncertainty reduction on the input space (representing uncertain reservoir parameters) than traditional methods, and with a more comprehensive understanding of the associated uncertainties. This study uses emulation-based Bayesian history-matching (BHM) uncertainty analysis for the uncertainty reduction of complex models, which is designed to address problems with a high number of both input and output parameters. We detail how to efficiently choose sets of outputs that are suitable for emulation and that are highly informative for reducing the input-parameter space, and we investigate different classes of outputs and objective functions. We use output emulators and implausibility analysis iteratively to perform uncertainty reduction in the input-parameter space, and we discuss the strengths and weaknesses of certain popular classes of objective functions in this context. We demonstrate our approach through an application to a benchmark synthetic model (built using public data from a Brazilian offshore field) in an early stage of development, using 4 years of historical data and four producers. This study investigates traditional simulation outputs (e.g., production data) and also novel classes of outputs, such as misfit indices and summaries of outputs.
We show that despite there being a large number (2,136) of possible outputs, only very few (16) were sufficient to represent the available information; these informative outputs were emulated with fast and efficient emulators at each iteration (or wave) of the history match to perform the uncertainty-reduction procedure successfully. Using this small set of outputs, we were able to substantially reduce the input space by removing 99.8% of the original volume. We found that a small set of physically meaningful individual production outputs were the most informative at early waves; once emulated, these resulted in the highest uncertainty reduction in the input-parameter space, while more complex but popular objective functions that combine several outputs were only modestly useful at later waves. The latter point is because objective functions such as misfit indices have complex surfaces that can lead to low-quality emulators and hence result in noninformative outputs. We present an iterative emulator-based Bayesian uncertainty-reduction process in which all possible input-parameter configurations that lead to statistically acceptable matches between the simulated and observed data are identified. This methodology presents four central characteristics: incorporation of a powerful dimension reduction on the output space, resulting in significantly increased efficiency; effective reduction of the input space; computational efficiency; and provision of a better understanding of the complex geometry of the input and output spaces.
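The implausibility analysis used at each wave can be sketched with the standard Bayesian history-matching measure: an input setting x is ruled out when the standardized distance between the observed datum and the emulator prediction exceeds a cutoff (commonly 3, motivated by Pukelsheim's three-sigma rule). The variances and observation values below are illustrative, not taken from the study.

```python
import math

def implausibility(z_obs, em_mean, em_var, obs_var, disc_var):
    """Standard BHM implausibility for one output:
    I(x) = |z - E[f(x)]| / sqrt(Var_emulator + Var_observation + Var_discrepancy).
    All variance inputs here are hypothetical."""
    return abs(z_obs - em_mean) / math.sqrt(em_var + obs_var + disc_var)

def non_implausible(I, threshold=3.0):
    # Inputs with I(x) below the cutoff survive to the next wave;
    # the rest of the input space is discarded.
    return I < threshold

# One emulated output at one candidate input setting (invented numbers)
I = implausibility(z_obs=100.0, em_mean=94.0, em_var=4.0, obs_var=4.0, disc_var=1.0)
print(round(I, 2), non_implausible(I))  # 2.0 True
```

Applied over many outputs (usually via the maximum implausibility across them) and repeated over waves with refocused emulators, this is the mechanism by which 99.8% of the input volume was removed.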