This paper represents an integration of artificial intelligence and Lean Sigma techniques to achieve large field production optimization. The first part of the methodology (detailed in SPE 90266, "Zonal Allocation and Increased Production Opportunities Using Data Mining in Kern River" [1]) involves data management and predictive data mining for identification of increased-production opportunities. It utilizes a set of data mining tools, including clustering techniques and neural networks, to identify new candidates for clean-outs, perforating, sidetracks, deepening, and other types of workovers. Furthermore, the expert system was used to predict the estimated production increase for these candidates. The second part of the methodology optimizes the implementation and post-workover follow-up of the opportunities identified in part one. It involves the use of Lean Sigma tools such as value stream mapping, level loading, continuous flow production, standard operating procedures, and kanbans, which optimize execution cycle time, peak oil production, decision making, cost, and safety [2]. This approach was successfully applied and executed in the Kern River field.

Introduction

With over 8,600 active producers averaging 10 BOPD each and a limited staff, streamlining the well optimization process in the Kern River field is critical to take advantage of a large and dynamic portfolio of relatively low oil-gain opportunities. It is essential to effectively identify, prioritize, and implement a high number of these opportunities, which typically range from 2 to 8 incremental barrels of oil per day.

As detailed in SPE paper 90266 [1], a significant production-increase opportunity was discovered in the lower sands through the use of artificial intelligence tools, after observing that some wells in the field have high production while nearby neighbor wells are very low producers. A pilot program was implemented and, following its success, the study was extended across the entire field. After identifying the field-wide opportunity, a significant workover program was launched.

A lookback on the pilot program indicated that several processes, including candidate selection, were successful and would continue to be used "as is" in the execution of the field-wide effort. The post-workover follow-up and put-on-production (POP) processes, however, were identified as weaknesses and were highlighted as areas for improvement. Lean Sigma techniques were selected to optimize and streamline these processes.

Background

This paper represents an integration of artificial intelligence and Lean Sigma techniques to improve workflow processes and execution of a large field optimization project in Kern River.

Reservoir Description. The Kern River field, located in Kern County, California, is a heavy oil reservoir consisting of nine productive sand intervals and many more individual sand lobes or flow units within the Kern River series. The field is 4 miles by 5 miles in areal extent and has over 8,600 active producing wells and 1,200 steam injectors. Producers are commingled, with very little individual-zone production test data available. The field is currently produced by steam injection with varying degrees of thermal maturity in each of the sands. The primary production mechanism is gravity drainage with an extremely low average reservoir pressure of 20 psi in the oil sands, requiring pumps to be set at or below the bottom-most oil sand and pumped off to produce effectively.
The northeast half of the field has little to no water-impacted sands, while the lowermost sands in the central portion of the field are water/aquifer impacted. The water-impacted sands are found progressively higher moving southwest, down structure, across the southwest half of the field. Higher pressures, exceeding 50 psi, are found in these sands.
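To make the candidate-identification idea from the Introduction concrete (wells producing well below nearby offsets are flagged as workover candidates in the 2-8 BOPD gain range), the sketch below shows a minimal neighbor-comparison screen. This is an illustrative stand-in, not the study's clustering/neural-network workflow; the file name, column names (API, X, Y, OIL_BOPD), and the 500-ft neighborhood radius are assumptions chosen only for the example.

```python
# Minimal sketch of neighbor-based underperformer screening (illustrative only).
# Assumed input: a table of active producers with surface coordinates (ft) and
# current oil rate (BOPD). Column names and the 500-ft radius are hypothetical.
import pandas as pd
from scipy.spatial import cKDTree

wells = pd.read_csv("producers.csv")          # assumed columns: API, X, Y, OIL_BOPD
tree = cKDTree(wells[["X", "Y"]].values)

candidates = []
for i, row in wells.iterrows():
    idx = tree.query_ball_point([row.X, row.Y], r=500)   # neighbors within 500 ft
    idx = [j for j in idx if j != i]
    if not idx:
        continue
    neighbor_rate = wells.iloc[idx]["OIL_BOPD"].median()
    gap = neighbor_rate - row.OIL_BOPD
    # Flag wells producing well below their neighborhood, within the 2-8 BOPD gain range
    if 2.0 <= gap <= 8.0:
        candidates.append({"API": row.API,
                           "current_bopd": row.OIL_BOPD,
                           "neighbor_median_bopd": neighbor_rate,
                           "potential_gain_bopd": gap})

ranked = pd.DataFrame(candidates).sort_values("potential_gain_bopd", ascending=False)
print(ranked.head(20))
```

In the actual study, the simple median comparison above is replaced by clustering techniques and neural networks that also account for completion, sand, and location data when estimating the expected production increase.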
In recent years, the application of Big Data technologies and associated analytics has enjoyed significant attention and has seen enormous growth in the oil industry, driven by the exponential growth of both historical and real-time data being collected. In this article, we present examples of the successful application of these technologies to the management and optimization of large heavy oil reservoirs. These particular reservoirs present a very interesting, yet complex, challenge given the steam-assisted gravity drainage recovery mechanism and the thousands of producers, injectors, and observation wells, which generate terabytes of data on a daily basis. To handle this large, high-dimensional information efficiently, we introduced new workflows consisting of operational-domain data acquisition, data transfer to the business domain, storage in accessible repositories, and, finally, data consumption, quality control, visualization, and analytics. The paper summarizes how big data is aggregated into smart applications to monitor the reservoirs and observe steamflood recovery development. Examples of high-definition DTS data integrated with completion, wellbore equipment, geologic markers, and real-time wellhead pressure showing reservoir heating and/or cooling are presented. Intelligent visualization tools and analytics, such as pattern recognition applied to static and dynamic subsurface data, enabled superior heat management. A second example of high-efficiency operations enabled by big data is well integrity monitoring during cyclic steam operations: pressure and injection-rate data streams are integrated with analytics to monitor and identify abnormal operating conditions. Lastly, an example of facility reliability and production-impact quantification is demonstrated through the integration of surface system stream data and subsurface well information. The latter led to significant business impact in terms of realized production and operating cost savings. The examples presented in this paper demonstrate the business impact and value creation generated by efficient use of Big Data technologies and associated analytics for heavy oil reservoir management and optimization. The workflows are now used in all Chevron heavy oil fields in the San Joaquin Valley.
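The well-integrity monitoring mentioned above, flagging abnormal operating conditions from streamed pressure and injection-rate data, can be pictured with a simple rolling-statistics check. This is only an illustrative sketch, not the deployed analytics; the sampling rate, window length, and z-score threshold are assumptions for the example.

```python
# Illustrative anomaly flagging on streamed injection data (not the deployed analytics).
# Assumed input: a time-indexed series of wellhead injection pressure (psi) sampled
# every minute. Window length and threshold are hypothetical choices.
import pandas as pd

def flag_abnormal(pressure: pd.Series, window: str = "6H", z_thresh: float = 3.0) -> pd.Series:
    """Mark samples that deviate strongly from a rolling baseline, as a crude
    proxy for abnormal operating conditions during cyclic steam operations."""
    baseline = pressure.rolling(window).median()
    spread = pressure.rolling(window).std()
    z = (pressure - baseline) / spread
    return z.abs() > z_thresh

# Example usage with synthetic data
idx = pd.date_range("2024-01-01", periods=1440, freq="min")
pressure = pd.Series(350.0, index=idx)
pressure.iloc[900:905] = 520.0            # injected pressure spike to illustrate detection
alerts = flag_abnormal(pressure)
print(alerts[alerts].index)               # timestamps flagged as abnormal
```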
This paper presents a fast and effective methodology to estimate zonal allocation for commingled producers in a multilayer reservoir using minimal, readily available data (well completion, historical production, sand depths, and location data). A set of data mining tools including regression, neural networks, and fuzzy logic was used to identify candidates for remedial work and the corresponding expected production increase. This approach was applied and executed in a portion of the Kern River field in California with very promising preliminary results.
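One way to picture the zonal-allocation idea is a regression that relates each well's total commingled rate to its per-zone completed thickness, then splits the measured rate in proportion to the fitted zone contributions. The sketch below is a simplified stand-in for the paper's combination of regression, neural networks, and fuzzy logic; the file name, column layout, and the thickness-only feature set are assumptions made for illustration.

```python
# Simplified zonal-allocation sketch (illustrative; not the paper's full method).
# Assumed input: one row per well with completed thickness (ft) in each sand
# and the measured commingled oil rate (BOPD). Column names are hypothetical.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

df = pd.read_csv("wells.csv")                     # e.g. columns THK_K1..THK_K9, OIL_BOPD
zone_cols = [c for c in df.columns if c.startswith("THK_")]

# Fit rate ~ per-zone thickness across all wells; non-negative, intercept-free
# coefficients keep the per-zone contributions physically interpretable.
model = LinearRegression(positive=True, fit_intercept=False)
model.fit(df[zone_cols], df["OIL_BOPD"])

# Allocate each well's commingled rate in proportion to its predicted zone contributions.
contrib = df[zone_cols].values * model.coef_      # predicted BOPD per zone, per well
contrib_sum = contrib.sum(axis=1, keepdims=True)
alloc = np.divide(contrib, contrib_sum, out=np.zeros_like(contrib), where=contrib_sum > 0)
allocated_bopd = alloc * df[["OIL_BOPD"]].values  # zonal split of the measured rate

allocation = pd.DataFrame(allocated_bopd, columns=zone_cols, index=df.index)
print(allocation.head())
```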
In an effort to better understand well performance in one of Chevron's assets in the San Joaquin Valley, a study was conducted to evaluate perforation strategies and capture best practices. Well completion through perforation is typically performed using bare-essential technology such as wireline logs and perforation guns. For basic reservoir formations, simple rules of thumb are used for perforation spacing and interval lengths; these are rarely validated by other methods, such as production logging and micro-seismic monitoring. For more challenging lithology, a more appropriate approach is to place perforation clusters in target formations with similar properties.

The research paper presents an efficient use of fuzzy clustering technology for identification of an optimum perforation strategy in a challenging waterflood diatomite reservoir. The methodology was applied to all wells drilled in the reservoir within the last two years and showed that the new approach improved on previous practices, not only by producing better perforation designs but also by yielding an observed increase in production. Cluster analysis is the task of grouping a set of objects so that objects in the same group (called a cluster) are more similar to each other (in some sense) than to those in other clusters. There are two commonly used types of clustering methods: hard and fuzzy clustering. In hard clustering, data is divided into distinct clusters, where each data element belongs to exactly one cluster. In fuzzy clustering, data elements can belong to more than one cluster, and associated with each element is a set of membership levels.

The fuzzy clustering algorithm, also known as the Fuzzy C-Means (FCM) algorithm, was applied to log data from wells in different areas of a reservoir. Based on the clustering results, the workflow then identified whether the perforation was performed on "good" regions (sand) or on "bad" regions (shale bedding). This information allowed evaluation of the perforation jobs executed and captured best practices and design changes for future well completions. The case study presents a simple yet efficient workflow to extract additional information from logs and improve completion strategies and perforation design. The methodology is flexible and can be applied to any well where complex lithology makes it a challenge to define the optimum perforation intervals.
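A minimal NumPy implementation of Fuzzy C-Means conveys the soft-membership idea described above. This is a generic textbook FCM sketch, not the study's workflow; the two-cluster sand/shale interpretation, the synthetic gamma-ray/density features, and all parameter values are assumptions for illustration.

```python
# Generic Fuzzy C-Means sketch (illustrative; not the study's implementation).
# Assumed features: gamma ray and bulk density log samples; two clusters
# interpreted as "sand" vs. "shale". Parameter values are hypothetical.
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Cluster samples X (n_samples, n_features) into c fuzzy clusters.
    Returns cluster centers and the membership matrix U (n_samples, c)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.dirichlet(np.ones(c), size=n)            # random memberships summing to 1
    for _ in range(max_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.fmax(d, 1e-12)                        # avoid division by zero
        inv = d ** (-2.0 / (m - 1.0))
        U_new = inv / inv.sum(axis=1, keepdims=True) # standard FCM membership update
        if np.abs(U_new - U).max() < tol:
            U = U_new
            break
        U = U_new
    return centers, U

# Example usage on synthetic "log" samples: column 0 ~ gamma ray (API), column 1 ~ density (g/cc)
rng = np.random.default_rng(1)
sand = rng.normal([40.0, 2.2], [5.0, 0.05], size=(200, 2))
shale = rng.normal([95.0, 2.5], [8.0, 0.05], size=(200, 2))
X = np.vstack([sand, shale])
centers, U = fuzzy_c_means(X, c=2)
labels = U.argmax(axis=1)                            # hard sand/shale label from fuzzy membership
print(centers)
```

Perforated intervals could then be compared against the labeled depths to judge whether shots landed in the "good" (sand) cluster, which is the evaluation the workflow above describes.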