The paper presents a practical tool for evaluating hydraulic fracturing efficiency. The tool is based on a data-driven approach that helps interpret real-time data. From hydraulic fracturing (HF) job monitoring, statistical metrics and key performance indicators (KPIs) are generated as valuable input for future designs and for identifying potential savings in operations. Machine learning (ML) algorithms are proposed to reduce the tedious work of completion engineers by automatically classifying each timestamp of the treatment schedule and assigning a stage label. Support vector machine and neural network algorithms are used for operating-stage classification. These models are trained and evaluated on real-time treatment datasets. After automatic stage recognition, relevant statistical parameters are calculated, enabling advanced data analytics. Detailed analysis of historical data allows identification of areas for improvement and the setting of new best practices. The first research objective was to gather data from various companies and structure them under a common template that preserves the most critical information gained during the hydraulic fracturing job. Afterwards, the data are preprocessed and labelled using signal-processing routines that significantly decrease the labelling time. The labels, or classes, define the different stages that can be distinguished during the treatment. Finally, the goal is to decrease the time needed for data labelling. Therefore, two multiclass classification models, a Support Vector Machine (SVM) and a Neural Network (NN), are built and evaluated. Based on the evaluation metrics, both models achieved high accuracy and reliable results; however, the SVM model achieved slightly higher accuracy and F1 score. The key value of these models is that they provide a computational method to automatically extract a pumping schedule from hydraulic fracturing time-series data.
These models also enable post-job analysis and the selection of a proper pump schedule for future HF treatments based on previous experience. Such post-job analysis can improve the effectiveness of future operations by utilizing materials and fluids more efficiently.
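The per-timestamp stage classification described above can be sketched with a standard SVM pipeline. This is a minimal illustration on synthetic data, not the paper's implementation: the three channels (treating pressure, slurry rate, proppant concentration) are typical HF treatment signals, and the stage labels are hypothetical placeholders for the actual pumping-schedule classes.

```python
# Sketch: per-timestamp stage classification for HF treatment time series.
# Channels and stage labels are illustrative assumptions, not the paper's.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import accuracy_score, f1_score

rng = np.random.default_rng(0)
STAGES = ["pad", "proppant_ramp", "flush"]  # hypothetical stage classes

def synth_stage(label, n):
    # Synthetic per-stage channel statistics (demonstration only).
    base = {"pad": (60, 80, 0.0), "proppant_ramp": (65, 90, 2.0), "flush": (55, 85, 0.0)}
    p, q, c = base[label]
    X = np.column_stack([
        rng.normal(p, 2, n),    # treating pressure, MPa
        rng.normal(q, 3, n),    # slurry rate, bbl/min
        rng.normal(c, 0.2, n),  # proppant concentration, ppa
    ])
    return X, np.full(n, STAGES.index(label))

Xs, ys = zip(*(synth_stage(s, 200) for s in STAGES))
X, y = np.vstack(Xs), np.concatenate(ys)

# Scaling before an RBF-kernel SVM is standard practice for mixed-unit channels.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)
pred = clf.predict(X)
print("accuracy:", round(accuracy_score(y, pred), 3))
print("macro F1:", round(f1_score(y, pred, average="macro"), 3))
```

Once each timestamp carries a stage label, the pumping schedule can be recovered by grouping consecutive timestamps with the same label, which is the automatic extraction step the abstract refers to.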
SICLO (Source of data and information; Input data; Calculation/Analytic; Logic Analysis; Output/Value Delivery) is an innovative methodology for smart diagnostics, reservoir/well performance optimization, and estimation of remaining reserves, based on the integration of a Petroleum Data Management System (PDMS) and expert rules. Implementing the SICLO methodology yields the best strategy for producing remaining reserves most profitably. PDMS is the foundation of the SICLO methodology and provides structured, verified information that follows the well life cycle. Within PDMS, data are organized and structured according to clearly defined principles and rules and filtered through multiple levels of quality control. Structured data allow the integration of production and reservoir information with real-time data, achieving the maximum level of diagnosis of system operating performance with respect to reservoir and well potentials and system constraints. The built-in workflows and the architecture of the whole process are automated, accelerating task completion. The SICLO methodology integrates expert-driven knowledge and pattern recognition tools, enhanced by data-driven, artificial intelligence, neural network, and fuzzy logic technologies, to deliver adaptive solutions for identifying the locations of remaining reserves, optimizing oil and gas production, and minimizing associated operational costs.
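The expert-rule layer over structured well data, corresponding to SICLO's Logic Analysis step, can be sketched as a simple rule engine. The field names, thresholds, and diagnostic messages below are hypothetical illustrations, not rules from the methodology itself.

```python
# Sketch: expert-rule diagnostics over a structured well snapshot.
# All thresholds and field names are assumed for illustration.
from dataclasses import dataclass

@dataclass
class WellSnapshot:
    well_id: str
    reservoir_pressure: float  # bar
    flowing_bhp: float         # bar, flowing bottom-hole pressure
    water_cut: float           # fraction
    rate_decline: float        # fraction per month

def diagnose(w: WellSnapshot) -> list[str]:
    """Apply simple expert rules and return diagnostic flags."""
    flags = []
    drawdown = w.reservoir_pressure - w.flowing_bhp
    if drawdown < 10:
        flags.append("low drawdown: review lift design or reservoir pressure")
    if w.water_cut > 0.9:
        flags.append("high water cut: candidate for water shut-off review")
    if w.rate_decline > 0.15:
        flags.append("steep decline: check for damage or depletion")
    return flags or ["no anomalies under these rules"]

# Example: a well with low drawdown and high water cut triggers two flags.
print(diagnose(WellSnapshot("W-101", 180.0, 175.0, 0.95, 0.02)))
```

In a full system, such rules would be one layer, combined with the data-driven and fuzzy-logic components the abstract describes, and would read their inputs from the quality-controlled PDMS records rather than hand-built snapshots.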
Well geometries with a shallow kick-off point, in conjunction with surface infrastructure limitations, have made Electrical Submersible Pump (ESP) technology one of the most suitable artificial lift methods for the harsh reservoir conditions. However, these harsh conditions, namely low reservoir pressure, high reservoir temperature, scaling problems in various forms, and high gas content at the pump intake, have reduced ESP system run life. Therefore, this research presents the Autonomous Adaptive Algorithm (A3), a holistic approach that integrates analytical and machine learning models to assist production engineers in the early detection of operating problems. A3 relies on different data sources and uses unique well-diagnostics logic to generate valuable features and prepare data for training. Finally, the paper evaluates different classifiers and explores the possibilities of deploying A3 as a flexible edge solution. The benefits of the research are demonstrated on several problematic ESP wells.
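The classifier-evaluation step described above can be sketched as a cross-validated comparison of candidate models on engineered ESP features. The feature channels (intake pressure, motor temperature, motor current) and the binary "gas interference" label are assumptions for demonstration; the actual A3 feature set and problem classes come from the paper's diagnostics logic.

```python
# Sketch: comparing classifiers for early detection of an ESP operating
# problem. Features, distributions, and the label are illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 400
# Synthetic channels: intake pressure (bar), motor temp (degC), current (A)
normal = np.column_stack([
    rng.normal(40, 3, n), rng.normal(110, 5, n), rng.normal(30, 2, n)])
gas_interference = np.column_stack([
    rng.normal(25, 4, n), rng.normal(125, 6, n), rng.normal(22, 3, n)])
X = np.vstack([normal, gas_interference])
y = np.concatenate([np.zeros(n), np.ones(n)])  # 0 = normal, 1 = problem

# A lightweight linear model and a tree ensemble are natural candidates
# for an edge deployment with limited compute.
for clf in (LogisticRegression(max_iter=1000),
            RandomForestClassifier(n_estimators=100, random_state=0)):
    score = cross_val_score(clf, X, y, cv=5).mean()
    print(type(clf).__name__, round(score, 3))
```

For an edge solution, model size and inference cost would also factor into the comparison, not accuracy alone.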