Bottomhole pressures are a valuable source of information for reservoir surveillance and management and lie at the heart of reservoir engineering. Real-time gauges record pressure data at 5-second intervals, resulting in an enormous accumulation of data. The size and volume of the accumulated data limit the capability of existing analysis software to load and interpret it. This paper presents an improved methodology for data quality checking and data optimization in determining reservoir pressure depletion via an Autoregressive Integrated Moving Average (ARIMA) model and a Decision Tree model. The dataset was gathered from a representative reservoir in the Malay Basin. The ARIMA algorithm presented was designed for quick and efficient data quality checking. The Decision Tree model, on the other hand, was used to select the maximum buildup pressure as the reservoir depletion point based on well-status parameters: the maximum pressures were selected from buildup data whenever the decision tree conditions were met. Compared with classical methods, the algorithm achieved around 90% agreement. The resulting data can then be fully utilized for reserve reporting and forecasting studies, i.e., analysis and numerical simulation. The paper also reports on the advantages of applying the ARIMA-Decision Tree algorithm in pressure surveillance, revealing a few key benefits: it minimizes the need for well intervention and gives reservoir engineers an optimized workflow to view, utilize, and detect reservoir depletion data. The ARIMA-Decision Tree algorithm is targeted for installation and integration in the field data historian for better overall data analysis and visualization. The reservoir pressure depletion data produced by the ARIMA-Decision Tree algorithm will in turn improve more advanced analyses, such as simulation and forecasting, in terms of overall speed and accuracy.
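The abstract does not include an implementation, so the following is only a minimal sketch of an ARIMA-flavored quality check under stated assumptions. Rather than fitting a full ARIMA(p, d, q) model, it flags jumps in the first-differenced pressure series (differencing is the "I" in ARIMA) using a robust median/MAD threshold; the function name and the k = 6 cutoff are illustrative assumptions, not the paper's actual algorithm.

```python
def flag_outliers(series, k=6.0):
    """Flag suspect points in a pressure time series.

    A robust stand-in for ARIMA residual screening: difference the
    series once, then flag any difference that deviates from the
    median difference by more than k times the median absolute
    deviation (MAD). Returns the indices of flagged samples.
    """
    diffs = [series[t] - series[t - 1] for t in range(1, len(series))]
    med = sorted(diffs)[len(diffs) // 2]
    mad = sorted(abs(d - med) for d in diffs)[len(diffs) // 2]
    scale = mad or 1e-9  # avoid a zero threshold on noise-free data
    return [t for t, d in enumerate(diffs, start=1)
            if abs(d - med) > k * scale]
```

On a smoothly declining pressure trend with a single spurious spike, only the two differences around the spike exceed the threshold, so just those samples are flagged for review.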
In conclusion, this paper presents the importance and application of incorporating a big data analytics algorithm in reservoir management and reporting. In future work, deliverability calculations can be incorporated into the model to identify and rectify any abnormal reservoir behavior.
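The decision-tree step described above reduces, at its core, to a rule-based selection: take the maximum recorded pressure only from intervals where the well-status conditions indicate a buildup (shut-in). A minimal sketch of that rule follows; the field names (`pressure`, `choke_open`, `rate`) and the shut-in conditions are hypothetical stand-ins for the paper's actual well-status parameters.

```python
# Rule-based selection of the maximum buildup pressure as the
# reservoir depletion point. The well-status fields and shut-in
# rule (choke closed, zero rate) are illustrative assumptions.

def max_buildup_pressure(samples):
    """samples: iterable of dicts with 'pressure', 'choke_open', 'rate'.

    Returns the maximum pressure among samples that satisfy the
    shut-in rule, or None if the conditions were never met.
    """
    shut_in = [s["pressure"] for s in samples
               if not s["choke_open"] and s["rate"] == 0.0]
    return max(shut_in) if shut_in else None
```

A real decision tree would typically be trained (or hand-built) over several such status parameters, but the selection it performs on each buildup window has this shape.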
The oil and gas industry has evolved toward digitalization, and data are now fully utilized for decision making, cost optimization, improved efficiency, and increased productivity. The upstream sector produces a huge volume of operational and production data on real-time platforms. Manually quality-checking and analyzing all of the available data is a tedious process that is impractical and inefficient (Subrahmanya et al., 2014). Machine learning algorithms can improve on this by automating data quality checks at scale. In addition, imputation can be applied to substitute missing data and to forecast future values in real time. In this study, a large dataset was collected in real time from more than 30,000 tags/sensors. The real-time data were collected at up-to-second resolution, and quality checks needed to be performed on every data point collected. First, each equipment tag/sensor was checked and mapped against the P&ID drawings. Next, an API was developed for the real-time platform. A percentile-based machine learning method was then applied and developed to quality-check the operational and production time-series data at scale. Finally, the process was customized for other offshore platforms in the field. In addition to automated data quality checking, machine learning algorithms were also used to compute missing information from the underlying relationships between data points. These approaches reduce the time needed to maintain quality, reliable data for further analysis and usage. As a result, the percentile-based method successfully automated the data quality check process for greater productivity and efficiency. The percentile method was applied to understand, validate, and monitor data at scale. Anomalies were detected in real time, allowing operators to investigate any possible fault, damage, or loss. All outliers and missing or erroneous data were also recorded and visualized in a dashboard.
The model also provides additional statistics to define stale and bad data on top of automatically defined parameters. These features have improved the efficiency of data acquisition and preparation. In conclusion, the model assists operators in monitoring daily operational and production data efficiently. Data quality and reliability are the key factors in asset management for ensuring operators' trust in the produced data. The quality-checked data can then be utilized for further analysis, troubleshooting, and decision making.
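The percentile-based quality check described above might look like the following sketch. The 1st/99th percentile bounds, the nearest-rank percentile definition, and the function names are assumptions for illustration, not the authors' implementation.

```python
# Sketch of percentile-based data quality flagging for a batch of
# sensor readings: values outside the [1st, 99th] percentile band
# are flagged as outliers, and gaps (None) as missing.

def percentile(values, p):
    """Nearest-rank percentile of a non-empty list (p in [0, 100])."""
    s = sorted(values)
    idx = max(0, min(len(s) - 1, round(p / 100 * (len(s) - 1))))
    return s[idx]

def quality_flags(values, lo_p=1.0, hi_p=99.0):
    """Label each reading 'ok', 'outlier', or 'missing'."""
    present = [v for v in values if v is not None]
    lo, hi = percentile(present, lo_p), percentile(present, hi_p)
    flags = []
    for v in values:
        if v is None:
            flags.append("missing")
        elif v < lo or v > hi:
            flags.append("outlier")
        else:
            flags.append("ok")
    return flags
```

In a production setting the bounds would be computed per tag over a rolling window and streamed from the historian; this batch version only shows the core rule.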