Abstract: Accurately characterizing fractures is complex. Several studies have proposed reducing uncertainty by incorporating fracture characterization into simulations using a probabilistic approach, maintaining the geological consistency of a range of models instead of a single matched model. We propose a new methodology, based on one of the steps of a general history-matching workflow, to reduce the uncertainty of reservoir attributes in naturally fractured reservoirs. This methodology maintains geological consistency …
“…Then, all simulation models are simulated and prepared for the data assimilation step. To carry out the data assimilation in step 5, several methods are available, according to the complexity of the data (Costa et al. 2018; Davolio and Schiozer 2018; Maschio and Schiozer 2016; Oliveira et al. 2018; Mahjour et al. 2020c). During this step, a subset of the generated simulation models is selected based on past reservoir performance (Gaspar et al. 2016).…”
Section: Distance-based Clustering with Simple Matching Coefficient (DCSMC) Method
The simulation process under uncertainty needs numerous reservoir models, which can be very time-consuming. Hence, selecting representative models (RMs) that represent the uncertainty space of the full ensemble is required. In this work, we compare two scenario reduction techniques: (1) Distance-based Clustering with Simple Matching Coefficient (DCSMC), applied before the simulation process using reservoir static data, and (2) a metaheuristic algorithm (the RMFinder technique), applied after the simulation process using reservoir dynamic data. We use these two methods to investigate the effect of static versus dynamic data usage on the accuracy and speed of the scenario reduction process, focusing on field development purposes. A synthetic benchmark case named UNISIM-II-D, considering flow unit modelling, is used. The results showed that both scenario reduction methods are reliable in selecting the RMs for a specific production strategy. However, the RMs obtained for a given strategy using the DCSMC method can be applied to other strategies while preserving the representativeness of the models, whereas the strategy type plays a substantial role in selecting RMs with the metaheuristic method, so that each strategy has its own set of RMs. Because of the field development workflow in which the metaheuristic algorithm is used, the number of required flow simulation models and the computational time are greater than in the workflow in which the DCSMC method is applied. Hence, using static reservoir data in the scenario reduction process can be more reliable during the field development phase.
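The Simple Matching Coefficient distance underlying DCSMC can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' implementation: the ensemble values, the greedy max-min selection heuristic, and the function names are all hypothetical, standing in for a proper distance-based clustering of categorical static attributes.

```python
# Sketch of SMC-based representative-model selection (hypothetical data).
import numpy as np

def smc_distance(a, b):
    """SMC dissimilarity: fraction of attributes on which a and b differ."""
    a, b = np.asarray(a), np.asarray(b)
    return np.mean(a != b)

def select_representatives(scenarios, k):
    """Greedy max-min pick: start at the medoid, then repeatedly add the
    scenario farthest (in SMC distance) from those already chosen."""
    n = len(scenarios)
    dist = np.array([[smc_distance(scenarios[i], scenarios[j])
                      for j in range(n)] for i in range(n)])
    chosen = [int(dist.sum(axis=1).argmin())]  # medoid of the ensemble
    while len(chosen) < k:
        remaining = [i for i in range(n) if i not in chosen]
        nxt = max(remaining, key=lambda i: min(dist[i][c] for c in chosen))
        chosen.append(nxt)
    return chosen

# Hypothetical ensemble: 6 scenarios, 4 categorical attribute levels each
ensemble = [
    [0, 1, 2, 0],
    [0, 1, 2, 1],
    [2, 0, 1, 1],
    [2, 0, 1, 0],
    [1, 2, 0, 2],
    [1, 2, 0, 0],
]
rms = select_representatives(ensemble, k=3)
print(rms)  # [0, 2, 4]
```

Because SMC needs only the static attribute table, this selection runs before any flow simulation, which is the source of the computational saving discussed above.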
“…Data assimilation: history match and reduce the number of scenarios with dynamic and seismic data. Several techniques are available (Avansi and Schiozer, 2015a; Bertolini et al., 2015; Costa et al., 2018; Davolio and Schiozer, 2018; Maschio and Schiozer, 2008, 2016; Oliveira et al., 2018), depending on the complexity of the case and the available data. From the accepted models, a Base Case is selected for the following steps (Base1).…”
This work describes a new methodology for integrated decision analysis in the development and management of petroleum fields, considering reservoir simulation, risk analysis, history matching, uncertainty reduction, representative models, and production strategy selection under uncertainty. Based on the concept of closed-loop reservoir management, we establish 12 steps to assist engineers in model updating and production optimization under uncertainty. The methodology is applied to UNISIM-I-D, a benchmark case based on the Namorado field in the Campos Basin, Brazil. The results show that the method is suitable for practical applications in complex reservoirs at different field stages (development and management). First, uncertainty is characterized in detail and scenarios are generated using an efficient sampling technique, which reduces the number of evaluations and is suitable for use with numerical reservoir simulation. We then perform multi-objective history-matching procedures, integrating static data (geostatistical realizations generated from reservoir information) and dynamic data (well production and pressure) to reduce uncertainty and thus provide a set of matched models for production forecasts. We select a small set of Representative Models (RMs) for decision risk analysis, integrating reservoir, economic, and other uncertainties so that decisions are based on risk-return techniques. We optimize production strategies for (1) each individual RM, to obtain different specialized solutions for field development, and (2) all RMs simultaneously, in a probabilistic procedure, to obtain a robust strategy. While the second approach ensures the best performance under uncertainty, the first provides valuable insights for value-of-information and flexibility analyses. Finally, we integrate reservoir and production systems to ensure realistic production forecasts.
This methodology uses reservoir simulations rather than proxy models to reliably predict field performance. The proposed methodology is efficient, easy to use, and compatible with real-time operations, even in complex cases where computational time is restrictive.
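The probabilistic step of optimizing over all RMs simultaneously can be illustrated with a toy calculation. Everything below is invented for illustration (strategy names, NPV values, and RM probabilities are hypothetical); it only shows the arithmetic of ranking strategies by probability-weighted expected value across representative models.

```python
# Toy robust-strategy ranking across Representative Models (RMs).
npv = {                      # hypothetical NPV (USD MM) of each strategy on each RM
    "S1": [850, 620, 710],   # strategy tuned on RM1
    "S2": [780, 690, 700],   # strategy tuned on RM2
    "S3": [760, 640, 760],   # strategy tuned on RM3
}
weights = [0.3, 0.4, 0.3]    # hypothetical RM probabilities from the risk analysis

def expected_npv(values, w):
    """Probability-weighted expected NPV over the RMs."""
    return sum(v * p for v, p in zip(values, w))

robust = max(npv, key=lambda s: expected_npv(npv[s], weights))
for s, vals in npv.items():
    print(s, expected_npv(vals, weights))
print("robust choice:", robust)  # S2
```

The specialized per-RM solutions (the column maxima) remain useful for value-of-information and flexibility analyses, while the robust choice maximizes performance under uncertainty.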
“…Regarding the history-matching problem, Tolstukhin et al. (2012) applied a sensitivity analysis to a portion of the Ekofisk field (North Sea, south of Norway) and identified the eight most important attributes for history matching, six of which are fracture related: fracture distribution, orientation, width, width-to-length ratio, permeability, and density. Costa et al. (2018) applied an Iterative Sensitivity Analysis (ISA) approach to reduce uncertainty in the global attributes of a complex naturally fractured reservoir.…”
Section: Introduction
“…This approach is the ISA. An example of its application is given by Costa et al. (2018), who applied the ISA approach to reduce uncertainty in the global attributes of a complex naturally fractured reservoir. The high uncertainty of strongly influential attributes might disguise the influence of other, less influential ones.…”
History matching for naturally fractured reservoirs is challenging because of the complexity of flow behavior in the fracture-matrix system. Calibrating these models in a history-matching procedure normally requires integration with geostatistical techniques (Big Loop, where history matching is integrated with reservoir modeling) for proper model characterization. In problems involving complex reservoir models, it is common to apply techniques such as sensitivity analysis to identify the most influential attributes and focus effort on what most impacts the response. Conventional Sensitivity Analysis (CSA), in which a subset of attributes is fixed at a unique value, may over-reduce the search space so that it is not properly explored. An alternative is Iterative Sensitivity Analysis (ISA), in which CSA is applied multiple times throughout the iterations. ISA follows three main steps: (a) CSA identifies Group i of influential attributes (i = 1, 2, 3, …, n); (b) the uncertainty of Group i is reduced, with the other attributes held at fixed values; and (c) return to step (a) and repeat the process. Conducting CSA multiple times allows the identification of influential attributes hidden by the high uncertainty of the most influential ones. In this work, we assess three methods: Method 1, ISA; Method 2, CSA; and Method 3, without sensitivity analysis, i.e., varying all uncertain attributes (a larger search space). Results showed that the number of simulation runs for Method 1 dropped 24% compared to Method 3 and 12% compared to Method 2 to reach a similar matching quality among acceptable models. In other words, Method 1 reached a similar quality of results with fewer simulations; ISA can therefore perform as well as CSA while demanding fewer simulations. All three methods identified the same five most influential attributes out of the initial 18. Even with many uncertain attributes, only a small percentage is responsible for most of the variability of the responses.
Their identification is essential for efficient history matching. For the case presented in this work, few fracture attributes were responsible for most of the variability of the responses.
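The ISA loop described in steps (a)-(c) can be sketched as follows. This is a minimal illustration, not the authors' implementation: `run_csa` and `reduce_uncertainty` are hypothetical placeholders, and the toy influence scores are invented purely to show how a moderately influential attribute can stay hidden until the dominant ones are resolved.

```python
# Sketch of the Iterative Sensitivity Analysis (ISA) loop (toy example).
def iterative_sensitivity_analysis(attributes, run_csa, reduce_uncertainty,
                                   max_iters=10):
    """(a) CSA finds the currently influential group; (b) reduce its
    uncertainty with the rest held fixed; (c) repeat until nothing new emerges."""
    resolved = []
    for _ in range(max_iters):
        group = run_csa(attributes, fixed=resolved)   # step (a)
        if not group:
            break                                     # no hidden attributes left
        reduce_uncertainty(group)                     # step (b)
        resolved.extend(group)                        # step (c): iterate
    return resolved

# Hypothetical influence scores: an attribute only "looks" influential
# in CSA when no unresolved attribute dominates it (masking effect).
influence = {"frac_perm": 0.9, "frac_density": 0.7, "frac_orientation": 0.4,
             "matrix_perm": 0.2, "aquifer_strength": 0.05}

def run_csa(attrs, fixed):
    open_ = [a for a in attrs if a not in fixed]
    top = max((influence[a] for a in open_), default=0.0)
    # masking rule: attributes more than 0.4 below the current top stay hidden
    return [a for a in open_ if influence[a] > 0.3 and influence[a] >= top - 0.4]

def reduce_uncertainty(group):
    pass  # placeholder: here the ranges of `group` would be narrowed

found = iterative_sensitivity_analysis(list(influence), run_csa,
                                       reduce_uncertainty)
print(found)  # ['frac_perm', 'frac_density', 'frac_orientation']
```

Note how `frac_orientation` is invisible in the first CSA pass (masked by the two dominant fracture attributes) and surfaces only in the second iteration, which is exactly the behavior that motivates running CSA iteratively.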