Nowadays, a significant number of reservoir models are constrained purely by a numerical reservoir perception. Such models neglect the influence of the geological events that are essential for characterizing and modeling carbonate reservoirs. This approach leads to conceptual errors, because ideally a reservoir model would enable the prediction of heterogeneities and mitigate reservoir-modeling uncertainty. The objective of this paper is to present the results of an iterative, integrated reservoir rock-typing characterization process and its subsequent implementation in reservoir models. This multidisciplinary study emerged as the fundamental pillar for modeling the Lower Cretaceous Thamama Group in a major oil field on the Arabian Plate. The rock characterization approach defines rock types (referred to as Static Rock Types, or SRTs, in this document) resulting from the combination of two sub-processes: the Geologic Synthesis and the Petrophysical Synthesis. The former defines facies groups by relating depositional facies to their associated diagenetic processes, while the Petrophysical Synthesis defines petrophysical groups based on combinations of similar petrophysical characteristics. Ultimately, this rock-typing approach generates Static Rock Types defined by the reconciliation of related geologic and petrophysical patterns. The data inventory for this study includes detailed core description, routine core analysis (RCA), mercury injection capillary pressure (MICP), and log data. Consistent data-quality validation allowed the implementation of a robust workflow combining deterministic methods and machine-learning-supported algorithms for data analysis. Static Rock Types were classified through distinctive sets of geologic and petrophysical groups. This classification resulted in four SRTs: SRT1 exhibits enhanced reservoir properties as a product of early diagenesis; SRT2 is dominated by neutral diagenetic processes that preserve reservoir properties; SRT3 and SRT4 are both associated with late, property-reducing diagenetic processes that distort the arrangement of minerals and the pore structure. The major achievement of this rock-typing approach lies in the integration of geology and petrophysics. This integration yields significant evidence for understanding reservoir properties at the depositional stage, property alteration produced by diagenetic processes, and the links between reservoir dynamic behavior and a geologic concept. This rock-typing approach replaces the approach used in the first-generation model to characterize a particular reservoir, which was limited to the classification of petrophysical patterns; instead, it associates each petrophysical pattern with a specific geologic facies, feature, or event. Ultimately, through the integration of dynamic and static data, reservoir models become more predictive. Similarly, the rock-typing approach presented herein assembles a solid static understanding in order to delineate the origin of, and the reasons behind, the dynamic behavior of a particular reservoir. This fit-for-purpose approach, built from the premise of integration, provides a complete basis for reservoir simulation, management, and forecasting; at the same time, it contributes to reducing reservoir uncertainties by enhancing the predictability of heterogeneities and the understanding of dynamic flow, which combined yield optimized field development strategies.
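As an illustration of how deterministic methods and machine-learning algorithms can be combined for petrophysical grouping, the minimal sketch below clusters hypothetical RCA samples by Flow Zone Indicator (FZI), a standard deterministic rock-typing metric; it is not the authors' actual workflow, and the column names, sample values, and choice of four clusters are assumptions for illustration only.

```python
# Illustrative sketch (assumed data, not the paper's workflow): group RCA
# samples into candidate petrophysical groups using the Flow Zone Indicator
# (FZI) followed by k-means clustering.
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans

def flow_zone_indicator(perm_md: pd.Series, phi: pd.Series) -> pd.Series:
    """FZI = RQI / phi_z, with RQI = 0.0314 * sqrt(k/phi) and
    phi_z = phi / (1 - phi); k in mD, phi as a fraction."""
    rqi = 0.0314 * np.sqrt(perm_md / phi)
    phi_z = phi / (1.0 - phi)
    return rqi / phi_z

# Hypothetical RCA table: helium porosity (fraction) and air permeability (mD).
rca = pd.DataFrame({
    "phi":  [0.28, 0.25, 0.18, 0.12, 0.22, 0.09],
    "k_md": [450.0, 120.0, 15.0, 0.8, 60.0, 0.2],
})
rca["log_fzi"] = np.log10(flow_zone_indicator(rca["k_md"], rca["phi"]))

# Cluster log10(FZI) into four candidate petrophysical groups; final SRTs
# would come from reconciling such clusters with the facies groups.
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(rca[["log_fzi"]])
rca["petro_group"] = km.labels_
print(rca)
```

In practice the cluster count and feature set would be chosen against the geologic synthesis rather than fixed in advance, which is exactly the reconciliation step the abstract describes.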
This paper describes the deployment of Autonomous Inflow Control Valve (AICV) technology in an oil-producing well affected by gas breakthrough, with the aim of reducing gas production and increasing conformance. Based on fluid properties, the AICV differentiates the fluid flowing through it and can autonomously choke or shut off gas inflow from highly gas-saturated zones while allowing oil production from healthy, oil-saturated zones. The subject oil producer has an open-hole section of over 3,200 ft. A multidisciplinary integration of well logs, production history, and subsurface geological description was used to model and design the optimum AICV completion. The main objective is to restrict the gas breakthrough to a smaller compartment, allowing the other compartments to produce at higher oil rates. The AICV completion was run with a remotely actuated shoe to enable fluid circulation from the end of the downhole completion string while running in hole. AICV technology allows proactive reservoir management: it shuts off the gas at the subsurface level autonomously, without any intervention. Owing to this choking effect, the AICV sustains well production within the reservoir management guidelines, with improved well availability and reduced operating expenditure. It also has a positive environmental impact through reduced gas flaring, as the AICV is expected to reduce gas production/GOR by 81%. This paper discusses in detail how the AICV completion offered a technically attractive and cost-effective gas management and production optimization opportunity. Future implementations of this solution will improve Miscible Water Alternating Gas (MWAG) injection efficiency and gas recycling, in addition to reducing the carbon footprint per barrel produced while sustaining production.
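For readers unfamiliar with the fluid-discrimination principle, the sketch below illustrates the mechanism commonly described for AICVs in the literature: a pilot flow passes through a laminar flow element (pressure drop roughly proportional to viscosity) in series with a turbulent flow element (pressure drop roughly proportional to density), so low-viscosity gas shifts the pressure balance onto the turbulent element and actuates the closing piston. All coefficients and fluid properties below are assumptions chosen only to show the contrast, not vendor design data.

```python
# Simplified, illustrative AICV pilot-circuit model (all values assumed).
# LFE: laminar flow element, dP ~ mu * q (viscosity dominated).
# TFE: turbulent flow element, dP ~ rho * q**2 (density dominated).
# Oil keeps most of the pressure drop on the LFE (valve open); for gas the
# drop shifts to the TFE, changing the piston force balance (valve closes).

A_LFE = 5.0e10   # lumped geometry constant of the laminar element [assumed]
B_TFE = 2.0e12   # lumped geometry constant of the turbulent element [assumed]
q = 3.0e-5       # pilot volumetric rate, m^3/s [assumed]

fluids = {                      # (viscosity Pa*s, density kg/m^3), typical orders
    "oil": (5.0e-3, 850.0),
    "gas": (1.5e-5, 90.0),
}

for name, (mu, rho) in fluids.items():
    dp_lfe = A_LFE * mu * q          # viscosity-dominated pressure drop
    dp_tfe = B_TFE * rho * q ** 2    # density-dominated pressure drop
    frac_lfe = dp_lfe / (dp_lfe + dp_tfe)
    print(f"{name}: {100 * frac_lfe:.0f}% of pilot pressure drop over the LFE")
```

With these assumed constants the oil case keeps most of the drop on the laminar element while the gas case moves it to the turbulent one, which is the property contrast the valve exploits to shut off gas autonomously.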
The phrase "unstructured data" usually refers to information that does not reside in a traditional row-column database. The larger part of enterprise data, nearly 80%, is unstructured and has been far less accessible. From emails, text documents, study reports, presentations, and memos to audio, video, and more, unstructured data is a huge body of information. This paper proposes a work-in-progress model for unstructured data management. In any E&P company, data lies in unstructured formats across local drives, network drives, SharePoint sites, emails, etc. Data sensitivity plays an important role in classifying the data; irrespective of its classification, it still holds valuable information that can be used to address business problems analytically. Because knowledge is shared across the business through emails, attachments, flat files, and presentations, a robust system or solution is required to manage the unstructured data. One example relates to decision making: business decisions are made over email or phone calls, so a huge knowledge potential exists in the business's emails. This information needs to be extracted in a way that allows it to be used for analytical decision making in the future. Duplication is another important aspect of unstructured data management that needs to be tackled. Scanning the current systems reveals multiple copies of the same document lying in different places in the organization; the same data keeps circulating among business users, causing duplication. A system that controls the duplication of unstructured data in a meaningful way would benefit the organization. With the ongoing advances in machine learning and natural language processing, combined with analytical tools, the time has come to extract value from unstructured data. The proposed method is to identify, gather, and classify the unstructured data; create and use a content management tool to organize and manage it; create a standard engine to deal with unstructured data without having to convert it to a structured format; and apply an analytical engine on top of this content to make predictions on the data. Whenever new data enters the content management system, it is ingested into the prediction analysis tool to assist the business in decision making.
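As a sketch of two of the proposed building blocks, duplication control and analytical classification, the following illustrates exact-duplicate detection by content hashing and a toy text classifier of the kind that could feed the proposed prediction engine. The file paths, category labels, and training texts are hypothetical, and a production system would need fuzzier matching and far more training data.

```python
# Work-in-progress sketch under assumed inputs: (1) find exact duplicate
# files by hashing their bytes, (2) route incoming text to a business
# category with a simple TF-IDF + logistic-regression classifier.
import hashlib
from pathlib import Path
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def find_duplicates(root: str) -> dict[str, list[Path]]:
    """Group files under `root` by SHA-256 of their content; any group
    with more than one path is a set of exact duplicates."""
    groups: dict[str, list[Path]] = {}
    for path in Path(root).rglob("*"):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            groups.setdefault(digest, []).append(path)
    return {h: paths for h, paths in groups.items() if len(paths) > 1}

# Tiny illustrative classifier (hypothetical labels and texts).
train_texts = ["approve drilling budget", "log data transmittal attached",
               "quarterly production review", "core analysis report"]
train_labels = ["decision", "data", "review", "data"]
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(train_texts, train_labels)
print(clf.predict(["please find the attached well log data"]))
```

Content hashing addresses only byte-identical copies; near-duplicates (the same presentation saved with small edits) would need similarity measures on the extracted text, which is where the NLP layer of the proposed model comes in.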
Subsurface data integrity is a complex and critical task in any E&P company. It starts with accurate planning, followed by acquisition, processing, storage, and implementation to generate corporate value. The cycle time of this data-incorporation process should be short, so that maximum information is available before an investment decision is taken; any delay in data incorporation is considered a loss of value. The objective of this work is to improve the existing data-submission process and reduce the total submission cycle time. The optimization of the data-submission process followed the basic principles of the Six Sigma methodology, which comprises five sequential steps: define, measure, analyze, improve, and control. The problem definition was selected on the basis of a data-submission KPI failure; this fact was accepted by all stakeholders and included in the project charter. The next step included detailed measurement and documentation of the existing workflow, which helped identify possible failure cases through end-user group sessions, followed by the identification of root causes. Solutions were then generated during brainstorming sessions, and a control mechanism was established to assure the execution of the process improvements. Applying a systematic approach and gathering all stakeholders from the Company's different business units made it possible to optimize a multifunctional data-delivery process, improve the corporate KPI, and preserve the value of the data. Several solutions were generated for implementation: developing an automated corporate tracking system for log data submission, focused on timely stakeholder awareness of required actions, and reviewing the organizational roles and responsibilities of each subunit to streamline communication channels. Another improvement relates to frequent service-quality meetings to monitor data-delivery status and align all involved parties, such as end users, the contracts team, data management, and vendors. The last, but not least, solution focuses on amending the existing service-level agreement to capture missing technical features of log data acquisition and interpretation at the different steps of data submission (Figure 1). The economic effect is a combination of two parts: additional production from newly drilled wells due to the timely update of a reservoir model, and the avoidance of liquidated damages resulting from late data access for end users. Improving a workflow or business process with the consistent Six Sigma method allows the successful implementation of a technical SOP within a company with a broad organization chart. A process-optimization project initiated by a technical end user always targets an existing failure in the process, the resolution of which gains value for the entire company.
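As a minimal illustration of the "measure" step in such a project, the sketch below computes the submission cycle time per well and flags breaches of an assumed 30-day KPI target; the well names, dates, and threshold are hypothetical, not the Company's actual figures.

```python
# Minimal "measure" sketch under assumed data: cycle time from log
# acquisition to end-user delivery, flagged against an assumed KPI target.
import pandas as pd

KPI_DAYS = 30  # assumed service-level target for the submission cycle

submissions = pd.DataFrame({
    "well":      ["W-101", "W-102", "W-103"],
    "acquired":  pd.to_datetime(["2023-01-05", "2023-02-10", "2023-03-01"]),
    "delivered": pd.to_datetime(["2023-01-28", "2023-03-30", "2023-03-25"]),
})
submissions["cycle_days"] = (submissions["delivered"]
                             - submissions["acquired"]).dt.days
submissions["kpi_breach"] = submissions["cycle_days"] > KPI_DAYS
print(submissions[["well", "cycle_days", "kpi_breach"]])
```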