Water affects almost every operation in the exploration and production (E&P) industry, and its properties are important to flow assurance, three-phase flow pressure/volume/temperature (PVT) modeling, and fluid compatibility across well construction, stimulation, and production operations. Until now, time-intensive laboratory tests or cumbersome third-party simulators were required to extract physicochemical properties. Here, a family of machine-learning-based reduced-order models (ROMs), trained on rigorous first-principles thermodynamic simulation results, is presented. Approximately 90,000 representative produced-water samples were generated using the United States Geological Survey (USGS) Produced Waters Geochemical Database (Blondes et al. 2019), with systematic variation of the concentrations of 14 common ions. A training data set of 1 million rows was constructed by further varying temperature (50-400°F) and pressure (14.7-20,000 psi) over broad ranges. Thermodynamic simulations were used to generate a data set with more than 500 parameters, including speciation; physicochemical properties such as density, thermal conductivity, heat capacity, and salinity; and, notably, the scaling potential for 11 common oilfield scale-forming minerals. More than 20 machine-learning algorithms were screened using cross-validation, and boosted decision trees were found to provide the best accuracy. The CatBoost algorithm (Prokhorenkova et al. 2018) was selected and further optimized. Model validation on unseen data showed relative errors of less than 1% for the majority of predicted properties, which is remarkable for such a complex multicomponent, multiphase system. Simulation details, modeling, and validation results are discussed. The trained and optimized ROMs can be incorporated into any workflow that depends on water property predictions.
As a demonstration, a web application, Water Digital Avatar, was built from these ROMs to quickly and accurately predict water properties and scaling potential from the entered water composition and desired conditions. The streamlined workflow provides users with model predictions in tabulated and graphical forms for analysis within the web application or offline via a downloaded spreadsheet. The developed ROMs enable automated decision making and improve water management workflows. The presented approach can be further extended to other oilfield and chemical engineering applications.
Fast and reliable field-portable chemical analysis of produced waters remains one of the main challenges of scale-risk mitigation, as it enables timely control over scale inhibitor type and dosage. While many analytical methods are potentially applicable to produced waters, most of them lack reliability under field conditions or do not meet increasingly tight cost requirements. X-ray fluorescence (XRF) spectrometry is routinely used in the oil field for analysis of cores, drill cuttings, and muds, as it provides quick, noninvasive detection of many elements simultaneously. Inexpensive portable handheld analyzers typically offer limited element range, sensitivity, and resolution, and are therefore expected to be inferior to benchtop instruments. The question is whether handheld devices can nevertheless achieve suitable detection limits and accuracy for produced water analysis. At the same time, one of the major challenges of XRF, the so-called matrix effect, is strong in produced waters and further degrades analysis accuracy. This study demonstrates how multivariate machine-learning (ML) techniques can be applied to the full XRF spectra recorded with a handheld analyzer. ML spectra processing is shown to successfully mitigate matrix effects and enable simultaneous quantification of all ions of interest. Interestingly, key physical (density) and chemical (total dissolved solids and hardness) properties of produced water can also be quantified using ML techniques. In the paper, the experimental protocols are described first, followed by a detailed discussion of the data workflows, covering XRF spectra preprocessing, algorithm selection and tuning, and independent validation procedures. Over 50 different ML algorithms are trained on different spectral ranges of a multicomponent calibration data set, and the three best models are applied to several real-life produced water sample sets for validation.
A rigorous error analysis is performed for all ML models. In field samples, the resulting analysis errors (RMSE) are less than 100 mg/L for barium and strontium, less than 150 mg/L for sulfate, and remarkably small for the other ions and properties, given that the measurements were made with a handheld device.