Gas injection pressure-volume-temperature (PVT) laboratory data play an important role in assessing the efficiency of enhanced oil recovery (EOR) processes. Although there is typically a large conventional PVT data set, gas injection laboratory studies are relatively scarce. On the other hand, performing EOR laboratory studies may be either unnecessary, as in the case of EOR screening, or unfeasible, as when the reservoir fluid composition at current conditions differs from that at initial conditions. Given that gas injection is widely assessed as a candidate EOR process, there is increased demand for time- and cost-effective solutions to predict the outcome of the associated gas injection laboratory experiments. While machine learning (ML) is extensively used to predict black-oil properties, this is not the case for compositional reservoir properties, including those related to gas injection. Can we use the typically extensive conventional laboratory data to help predict the needed gas injection parameters? This is the core of this paper. We present an ML-based solution that predicts pertinent gas injection study results from known fluid properties such as fluid composition and black-oil properties: that is, learning from samples with gas injection laboratory studies and predicting gas injection fluid parameters for the remaining, much larger, data set. We applied the proposed algorithms to an extensive corporate-wide database. Swelling tests were predicted using the trained ML models for samples lacking gas injection laboratory data. Several ML models were tested, and the results were analyzed to select the best-performing one. We present the algorithms and the associated results, and discuss the associated challenges and the applicability of the proposed models to other fields and data sets.
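The idea of learning gas injection parameters from conventional fluid data can be sketched as a supervised regression from fluid descriptors to a swelling-test output. The snippet below is a minimal illustration only: the feature layout, the sample values, and the swelling factors are hypothetical, and a simple k-nearest-neighbors regressor stands in for whatever models the paper actually trained.

```python
import math

def knn_predict(train_X, train_y, x, k=3):
    """Predict a target (e.g., a swelling factor) for sample x as the
    mean target of its k nearest training samples (Euclidean distance)."""
    dists = sorted((math.dist(row, x), y) for row, y in zip(train_X, train_y))
    return sum(y for _, y in dists[:k]) / k

# Hypothetical feature vectors: [C1 mol%, C2-C6 mol%, C7+ mol%, API gravity]
train_X = [
    [40.0, 25.0, 35.0, 32.0],
    [55.0, 20.0, 25.0, 38.0],
    [30.0, 30.0, 40.0, 28.0],
    [60.0, 18.0, 22.0, 41.0],
]
# Hypothetical swelling factors from gas injection lab studies
train_y = [1.12, 1.25, 1.08, 1.31]

# Predict for a sample that has conventional PVT data but no gas injection study
new_sample = [50.0, 22.0, 28.0, 36.0]
pred = knn_predict(train_X, train_y, new_sample, k=3)
```

In practice the feature set would include the full compositional analysis and black-oil properties, and the model choice would be driven by the error analysis the abstract describes.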
Flash calculation is an essential step in compositional reservoir simulation. However, it consumes a significant part of the simulation process, leading to long runtimes that may jeopardize on-time decisions. This is especially obvious in large reservoirs with many wells. In this paper we describe the use of a machine-learning (ML)-based flash-calculation model as a novel approach to potentially accelerate compositional reservoir simulation. The hybrid compositional simulation protocol uses an artificial-intelligence (AI)-based flash model as an alternative to thermodynamics-based phase-behavior modeling of the hydrocarbon fluid, while the fluid-flow equations in the porous medium are handled using a conventional approach. The ML model, capable of performing accurate flash calculations, is integrated into a reservoir simulator. Because flash calculations are time consuming and can lead to instability issues, replacing this step with the ML algorithm results in a faster runtime and enhanced stability. The initial stage in training the ML models consists of creating a synthetic flash data set with a wide range of compositions and pressures. An automated workflow is developed to build a large flash data set that mimics the fluid behavior and pressure depletion in the reservoir using one or more fluid samples from a large pressure-volume-temperature (PVT) database. For each sample, a customized equation of state (EOS) is built, based on which constant volume depletion (CVD) or differential liberation (DL) is modeled with prescribed pressure steps. For each pressure step, a constant composition expansion (CCE) is modeled for the hydrocarbon liquid with, in turn, prescribed pressure steps. For each of the CVD steps and the multiple CCE steps, a flash calculation is performed and stored to build the synthetic database. Using the automatically generated flash data set, ML models were trained to predict the flash outputs from the feed composition and pressure.
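The nested data-generation loop described above (depletion steps, with a CCE on the liquid at each step, every flash result recorded) can be sketched as follows. This is a structural illustration under stated assumptions: the `flash` callable stands in for a real EOS flash (which would solve the Rachford-Rice equation with EOS-derived K-values), and `toy_flash` is a deliberately simplistic two-component stand-in, not a physical model.

```python
def generate_flash_dataset(feed, depletion_pressures, cce_pressures, flash):
    """Build a synthetic flash data set mimicking reservoir depletion:
    for each depletion pressure step, flash the current feed, then run
    a constant-composition expansion (CCE) on the liquid phase at a
    series of lower pressures, recording every flash result.
    `flash(z, p)` is assumed to return (vapor_fraction, liquid_composition)."""
    records = []
    z = feed
    for p in depletion_pressures:
        vf, x_liq = flash(z, p)
        records.append({"feed": z, "pressure": p, "vf": vf})
        # CCE on the remaining liquid, only at pressures below the current step
        for p_cce in (pc for pc in cce_pressures if pc < p):
            vf_c, _ = flash(x_liq, p_cce)
            records.append({"feed": x_liq, "pressure": p_cce, "vf": vf_c})
        z = x_liq  # next depletion step starts from the remaining liquid
    return records

def toy_flash(z, p):
    """Toy stand-in (NOT an EOS): vapor fraction rises linearly below a
    pseudo saturation pressure of 300, light component depletes from liquid."""
    vf = max(0.0, min(1.0, (300.0 - p) / 300.0))
    x_liq = [z[0] * (1.0 - 0.5 * vf), z[1]]
    total = sum(x_liq)
    return vf, [c / total for c in x_liq]

data = generate_flash_dataset(
    feed=[0.6, 0.4],
    depletion_pressures=[250.0, 200.0, 150.0],
    cce_pressures=[220.0, 180.0, 120.0],
    flash=toy_flash,
)
```

The resulting records (feed composition, pressure, flash outputs) form the training table from which the ML flash models learn.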
The trained ML models are then integrated with the reservoir simulator to replace the conventional flash calculations with the ML flash-calculation model, which results in a faster runtime and enhanced stability. We applied the proposed algorithms to an extensive corporate-wide database. Flash results were predicted using the ML algorithm, preceded by a stability check performed using another ML model tapping into the exceptionally large PVT database. Several ML models were tested, and the results were analyzed to select the one with the least error. We present the ML-based stability-check and flash results together with results illustrating the performance of the reservoir simulator with the integrated AI-based flash, as well as a comparison to conventional flash calculation. We present a comprehensive AI-based stability-check and flash-calculation module as a fully reliable alternative to thermodynamics-based phase-behavior modeling of hydrocarbon fluids and, consequently, its full integration into an industry-standard reservoir simulator.
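The two-stage arrangement (a stability-check model gating the flash model) can be sketched like this. The model classes are hypothetical stand-ins with a scikit-learn-like `predict()` shape; in the paper both stages would be trained ML models, not the fixed-threshold toys used here.

```python
def ml_flash(feed, pressure, stability_model, flash_model):
    """Two-stage ML flash sketch: a stability-check model first decides
    whether the mixture splits into two phases; only then is the flash
    model evaluated to get the phase split."""
    features = list(feed) + [pressure]
    if not stability_model.predict(features):
        return {"two_phase": False, "vapor_fraction": 0.0}
    return {"two_phase": True, "vapor_fraction": flash_model.predict(features)}

class ThresholdStability:
    """Toy stand-in: declares two phases below a fixed pressure."""
    def __init__(self, psat):
        self.psat = psat
    def predict(self, features):
        return features[-1] < self.psat

class LinearFlash:
    """Toy stand-in: vapor fraction linear in undersaturation."""
    def __init__(self, psat):
        self.psat = psat
    def predict(self, features):
        return min(1.0, (self.psat - features[-1]) / self.psat)

stab = ThresholdStability(psat=300.0)
flsh = LinearFlash(psat=300.0)
single = ml_flash([0.6, 0.4], 350.0, stab, flsh)  # above psat: one phase
split = ml_flash([0.6, 0.4], 150.0, stab, flsh)   # below psat: two phases
```

Gating the flash behind a cheap stability check is what lets the simulator skip the expensive step for clearly single-phase cells.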
Reservoir simulation is required to aid decision-making for high-impact projects. It is a culmination of geophysical, geological, petrophysical, and engineering assessments of sparse, uncertain, and expensive data. History matching is a process of elevating trust in numerical models as they are calibrated to mimic the behaviour of the real-life asset. Traditional history matching relies on direct parameter assignment based on flat files used as input to the reservoir simulator. This enables a convenient method for the perturbation of uncertain parameters and their value assignments during the history matching process. Given the nature of the input files, the scope for uncertainty parameters is limited to the original petrophysical properties and their derived simulation properties in a specified group of grid blocks, occasionally extended to include fluid and multiphase-flow properties. However, there are key influential model-building steps prior to reservoir simulation that relate to data interpretation. These steps control not only the values of petrophysical properties but also their spatial correlation, cross-correlation, and variability. The limited scope for parameterization adds bias to the model calibration process, hence negatively impacting its outcome. In an era where ML/AI algorithms are shaping data interpretation methods, key modelling decisions can be revisited to realize the maximum value of subsurface data. However, a framework is required whereby these important model-building steps are captured in history matching to eliminate bias and ensure the geological consistency of the subsurface model during and after history matching. This paper demonstrates an automated workflow to calculate the recommended parameters that achieve the minimum mismatch score. The workflow is executed through a cloud platform offering compute elasticity to expedite history matching, and is composed of three main steps.
The first step is data loading, where simulation results and parameters are extracted from the submitted ensemble(s). The second step involves data preparation and cleaning: wells devoid of data are removed, and scaled metrics are created to calculate the mismatch score. The data is then grouped by simulation ID to obtain a field-level aggregation. The aggregated and cleaned simulation results are merged with the parameters list to create the input data set for the final step, where several machine learning models are trained and evaluated in parallel. The data is split into training and testing data sets. The target variable is the mismatch score, as the models try to predict the mismatch for a given set of parameters. Supervised learning regression algorithms were used; the best-performing ones were found to be random forest and gradient-boosted trees. After fine-tuning the machine learning models and evaluating them based on their coefficient of determination (R2 score), the best-fitting model is used to calculate the optimized parameters. This happens iteratively by generating new sets of parameters within a range and using the machine learning model to predict the mismatch for each until the lowest mismatch is found. The parameters resulting in the minimum mismatch are the recommended parameters. This workflow is implemented on a simulation model built for a mature gas condensate field in the Mediterranean, offshore Egypt. The field comprises three anticlines with a spill-fill petroleum system, where the majority of the wells are in one of the anticlines; the other anticlines have few wells and are candidates for appraisal. Moreover, there is high uncertainty in the sand distribution and reservoir properties, spill-point depths, and depletion, along with an unexplained phenomenon of a sustained gas-water contact in the new anticline even after 30 years of production from the old anticline.
This uncertainty in the understanding of the relationship between the two anticlines makes the selection of drilling locations a challenge. To assess the remaining reservoir volumes and identify potential infill targets, we used ML to study all the uncertainty combinations in a full-loop approach from the static to the dynamic model and to generate multiple representations that honour the geological understanding. The cloud-based agile reservoir modeling approach, enriched with ML/AI algorithms, enabled us to generate multiple realizations that match 30 years of historical production and pressure profiles, capturing many possible combinations of uncertain geological parameters and concepts. In addition, several forecast scenarios for three new appraisal wells were optimized based on the ensemble of history-matched models, minimizing the risk of drilling dry wells. In addition to going through the work process and results, this paper highlights the method's practical effectiveness and common issues in practical application. The use of cloud-based technology brought large cost savings and efficiency improvements: for example, given the existing on-premises infrastructure, it would have taken 1-2 years to achieve the same results that were achieved in 1-2 months, with a cost saving of around 1 million dollars in cluster hardware purchases. Moreover, cloud-based technology enables collaborative, iterative working styles for integrated teams and access to scalable technologies that are developed on the cloud only.
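The iterative parameter-recommendation step (generate candidate parameter sets within ranges, predict the mismatch for each with the trained model, keep the minimum) can be sketched as a random search over a surrogate. Everything here is illustrative: the parameter names and bounds are hypothetical, and a toy quadratic callable stands in for the fitted random-forest or gradient-boosted-trees model.

```python
import random

def recommend_parameters(surrogate, bounds, n_trials=5000, seed=0):
    """Random search over parameter ranges using a trained surrogate
    model that predicts the history-match mismatch score; returns the
    parameter set with the lowest predicted mismatch. `surrogate` is
    any callable mapping a parameter dict to a score."""
    rng = random.Random(seed)
    best_params, best_score = None, float("inf")
    for _ in range(n_trials):
        params = {name: rng.uniform(lo, hi) for name, (lo, hi) in bounds.items()}
        score = surrogate(params)
        if score < best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Hypothetical uncertain parameters; the toy surrogate has a known
# minimum at porosity_mult = 1.0, perm_mult = 2.0.
bounds = {"porosity_mult": (0.5, 1.5), "perm_mult": (0.1, 5.0)}
toy_surrogate = lambda p: (p["porosity_mult"] - 1.0) ** 2 + (p["perm_mult"] - 2.0) ** 2

best, score = recommend_parameters(toy_surrogate, bounds)
```

Because the surrogate is cheap to evaluate, thousands of candidate parameter sets can be screened without running the reservoir simulator, and only the recommended set needs full simulation for verification.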