Oil flow rate testing is a crucial practice in oil fields, where several methods facilitate well rate testing and measurement. Hundreds of multiphase flow meters (MPFMs) have been deployed to enhance the accuracy of rate measurement and provide reliable data across fields. Although these meters are of paramount importance, they require frequent preventive maintenance, calibration, and manpower. In this paper, an Artificial Neural Network (ANN) model is developed as a backup tool to replace MPFM measurements when the device becomes defective or inoperable. Several correlations have been established to facilitate oil well testing at minimal cost, relying on surface production parameters to estimate the oil flow rate without installing expensive equipment. The ANN model was developed using real-time wellhead parameters measured by surface equipment, capturing the properties and characteristics of each reservoir. The model was calibrated, tested, and validated to achieve the most accurate results, and was further optimized into a reliable tool for real-time rate estimation. An assessment of various correlations was conducted to compare the accuracy of each empirical equation against the ANN across five different datasets covering a wide range of data. Thousands of MPFM data points were compared against the Towailib, Marhoun, and Gilbert correlations, which showed highly deviated values with an average relative error above 40%. The same datasets were tested with the newly developed, optimized ANN model, which achieved an average relative error of 3.7% against the MPFM rate measurements. The new ANN model therefore delivers highly accurate results; it has contributed to enhancing testing efficiency and optimizing production. Utilizing this model is an essential practice for production engineers to validate well tests when prompt outcomes are desired, and it serves as another reliable tool to estimate real-time production rates when the metering device is down.
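As a rough illustration of the approach this abstract describes, the sketch below fits a small feed-forward network that maps surface wellhead parameters to an oil rate and scores it by average relative error against MPFM-style reference rates. The feature set (wellhead pressure, temperature, choke size), the network architecture, and the synthetic data are illustrative assumptions, not the paper's actual configuration.

```python
# Minimal sketch of an ANN rate-estimation model in the spirit of the abstract.
# Feature names (WHP, WHT, choke size) and the network architecture are
# illustrative assumptions; the paper does not specify them here.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic stand-in for real-time wellhead data: [WHP (psi), WHT (F), choke (1/64 in)]
X = rng.uniform([500, 100, 16], [3000, 250, 64], size=(1000, 3))
# Synthetic MPFM oil rate used as the regression target (bbl/d)
y = 2.5 * X[:, 0] + 10.0 * X[:, 2] + rng.normal(0, 50, size=1000)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Scale inputs, then fit a small feed-forward network
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0),
)
model.fit(X_train, y_train)

# Average relative error against held-out "MPFM" rates, as in the abstract
pred = model.predict(X_test)
are = np.mean(np.abs(pred - y_test) / np.abs(y_test)) * 100
print(f"Average relative error: {are:.1f}%")
```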
Analyzing large amounts of continuous real-time data while sustaining its reliability is a challenging task for engineers. Unreliable data can produce misleading analysis results and make critical decisions difficult in the oil and gas business, especially when the data being processed is continuous and real-time, such as the streams coming from Intelligent Field equipment. Upstream Intelligent Field equipment can combine wellhead sensors, ESPs, PDHMS, MPFMs, SWCs, SPFMs, MOVs, and H2S sensors. Intelligent Field data follows specific transmission nodes as it flows from each sensor or instrument to a Remote Terminal Unit (RTU), to the Supervisory Control and Data Acquisition (SCADA) system, to a Plant Information (PI) data historian, after appropriate filtration into the Exploration and Production (E&P) Corporate Database (Oracle), and finally into Petroleum Engineering applications. Unreliability can be introduced at any transmission node along this path, and discovering its root cause currently requires an exhaustive and lengthy process. This paper therefore introduces a methodology that tackles two main challenges: (1) estimating data reliability and (2) detecting the error location of unreliable data, by utilizing machine learning methods to develop pattern recognition algorithms that recognize the transmission nodes where the unreliability appeared. Two of the three machine learning algorithms evaluated were selected: Decision Tree and Gradient Boosting (GB). The Decision Tree showed an accuracy of 94.6%, while Gradient Boosting showed a higher accuracy of 96.4% for estimating data reliability and determining error location. The advantage of GB over a single decision tree is that GB builds an ensemble of trees sequentially, with each new tree trained to correct the errors of the trees before it; this staged training reduces the error dramatically.
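A minimal sketch of the head-to-head comparison named in the abstract appears below, assuming a multi-class setup in which each class stands for one transmission node (sensor, RTU, SCADA, PI historian, corporate database). The synthetic features and labels are illustrative; the paper's actual feature engineering for the transmission path is not described here.

```python
# Minimal sketch comparing the two classifiers named in the abstract.
# Features and node labels are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in: each class plays the role of one transmission node
# (sensor, RTU, SCADA, PI historian, corporate database).
X, y = make_classification(
    n_samples=2000, n_features=10, n_informative=6, n_classes=5, random_state=0
)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for name, clf in [
    ("Decision Tree", DecisionTreeClassifier(random_state=0)),
    ("Gradient Boosting", GradientBoostingClassifier(random_state=0)),
]:
    clf.fit(X_train, y_train)
    print(f"{name}: accuracy = {clf.score(X_test, y_test):.3f}")
```

On most datasets the boosted ensemble outperforms the single tree, consistent with the 94.6% versus 96.4% accuracies reported in the abstract.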
Nowadays, with advanced technology, a large quantity of real-time data flows from field equipment to the engineers' desktops, yet the quality of that data is in most cases questionable. High-quality data is required for production workflows and technical studies; failure to acquire reliable data affects the calculation of production parameters and thus the overall understanding of well performance. Monitoring data reliability and quality is therefore essential. A project was initiated to tackle the various reliability issues across eight fields, and a focus team was formed. The team assessed the current data reliability and facilitated the development of action plans using the Lean Six Sigma concept, which follows five phases: define, measure, analyze, improve, and control. Tools such as the fishbone diagram and 5-WHY were used to identify the root causes of the reliability issues along with corresponding solutions, which aided the development of a detailed implementation plan. The project targets an increase in overall data reliability within a six-month period, with an anticipated gain of 5% after the tag deletion and re-mapping campaign. It is worth mentioning that utilizing production workflows through effective monitoring of well rate compliance and ESPs is associated with remarkable cost savings. Securing high data reliability from the various equipment will enable engineers to track well rates and statuses from their desktops, while effective monitoring of ESP performance will help prevent trips and optimize ESP operations in the field. Last but not least, effective data monitoring will ensure the upkeep of the Intelligent Field equipment.
The process of validating and monitoring pressure and temperature data is a key element of production engineering, as it ensures proper well evaluation; consequently, wells are frequently surveyed for better reservoir monitoring and accurate measurement of productivity. This study explores a validation method using Artificial Intelligence (AI) and Machine Learning (ML) classification models, developed from historical data, that automatically validate conducted pressure and temperature measurements and communicate observations and alerts to engineers. The proposed method validates each measurement with an ML model trained on previously conducted measurements. The model is fed pre-identified key production and pressure/temperature parameters used to classify surveys; these parameters were selected from historical data and measurement reports, then analyzed and ranked by their influence on model performance and accuracy using advanced algorithms and correlation analysis. The model predicts and classifies test measurements by exploiting non-linear relationships, combining data-based with physics-based analysis. The dataset of conducted pressure and temperature measurements was split into training and testing sets, and K-fold cross-validation was performed on the training set to validate the performance of all candidate ML models. The results of each model were compared for accuracy, and the Random Forest classification algorithm was selected. The developed classification model achieved an overall accuracy above 95%. Validating and testing the model on several cases showed promising results, as irregularities are detected before engineers evaluate the conducted measurements. The model enables effective utilization of previous measurements to validate newly conducted ones and alerts engineers to any detected anomalies in advance, yielding significant cost and time savings through its ability to automatically predict and validate measurements. The validation model enhanced the monitoring and interpretation of production, pressure, and temperature measurements. It is developed to run on the cloud, provides automatic validation of newly conducted measurements, and delivers an alerting mechanism to engineers for any observed abnormalities.
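The sketch below illustrates the model-selection workflow the abstract outlines: a train/test split, K-fold cross-validation on the training set, a Random Forest fit, and a ranking of input parameters by importance. The survey features and labels are synthetic stand-ins; the paper's actual parameter set and labeling scheme are assumptions here.

```python
# Minimal sketch of the model-selection step described in the abstract:
# K-fold cross-validation on the training split, then a held-out test score.
# Survey features and "valid"/"anomalous" labels are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import KFold, cross_val_score, train_test_split

# Synthetic stand-in for labeled pressure/temperature survey records
X, y = make_classification(n_samples=1500, n_features=8, n_informative=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)

# K-fold cross-validation on the training set to validate model performance
cv = KFold(n_splits=5, shuffle=True, random_state=0)
cv_scores = cross_val_score(clf, X_train, y_train, cv=cv)
print(f"5-fold CV accuracy: {cv_scores.mean():.3f} +/- {cv_scores.std():.3f}")

# Final fit and evaluation on the held-out test set
clf.fit(X_train, y_train)
print(f"Test accuracy: {clf.score(X_test, y_test):.3f}")

# Rank the most influential input parameters, as the abstract describes
ranking = np.argsort(clf.feature_importances_)[::-1]
print("Feature ranking (most important first):", ranking)
```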
Knowledge management is the process of defining, structuring, storing, and sharing the knowledge and experience of employees within an organization to increase workplace efficacy and improve general decision-making. This paper sheds light on a structured production engineering knowledge management program and its initiatives for enhancing an organization's performance, with associated factors such as learning from remote locations and the availability of subject matter experts. The knowledge management structure rests on three pillars: organizational culture, governance, and technology, integrated to form an effective production engineering knowledge management structure. The cultural pillar is captured by formulating a two-year plan, implementing knowledge management process requirements, benchmarking best practices, and revising the target on a regular basis. Technology is captured through the production engineering community of practice, where people gather in one place to share knowledge and best practices. Governance follows a structured architectural plan in which the key performance indicators of production engineering knowledge assets and events are monitored regularly. The Knowledge Asset Index target was exceeded thanks to several initiatives, such as the development of specific instruction manuals for Intelligent Field equipment that lay out the roles and responsibilities of all concerned organizations in maintaining healthy Intelligent Field equipment. For the Knowledge Events Index, the organization's subject matter experts conducted an in-house Intelligent Field training course and technical publication writing workshops, raising awareness and training more than 200 professionals. With excellent implementation of the production engineering community of practice plans, the knowledge management team won an award for a remarkable increase in the participation index. The results show that well-planned knowledge management offers a number of advantages: increased workplace efficiency, consistency in the information provided to knowledge recipients, improved skill growth and development among employees, and improved organizational decision-making. This paper can serve as motivation for knowledge management structures that implement knowledge performance measures pertaining to production engineering. Looking ahead, the process enables even quicker decision-making in the workplace and reduces organizations' training times, helping to bridge knowledge gaps.