Desktop distributed computing allows companies to exploit the idle cycles of pervasive desktop PC systems, increasing the available computing power by orders of magnitude (10x to 1000x). Applications are submitted, distributed, and run on a grid of desktop PCs. Since the applications may be malformed or malicious, the key challenges for a desktop grid are how to 1) prevent the distributed computing application from gaining unwarranted access to, or modifying, data and files on the desktop PC, 2) control the distributed computing application's resource usage and behavior as it runs on the desktop PC, and 3) protect the distributed application's program and its data. In this paper we describe the Entropia Virtual Machine and the solutions it embodies for each of these challenges.
The instrumentation of reservoirs and wells using distributed downhole sensor and information-communication systems has enabled significant advances in their management. Examples include monitoring of well integrity and reservoir compaction; production monitoring of artificial lift wells; data integration for short-term history matching, reservoir characterization and geologic model updating; flow rate allocation, inflow profiling, probabilistic production forecasting and downhole set point optimization in intelligent well completions; matrix acidizing and hydraulic fracturing characterization; dynamic estimation of petrophysical properties; dynamic geomechanical properties estimation; joint inversion of distributed downhole fiber sensing and time-lapse seismic data for permeability anisotropy estimation; skin analysis; reservoir and well performance diagnosis; reservoir analysis and parameter estimation; multiphase flow assurance; and many more. Expanding the benefits of distributed downhole sensors is currently driving the need for big data infrastructures and associated dynamic data-driven application systems for reservoir characterization, simulation and management. However, the significant costs of setting up and managing the infrastructure to handle distributed downhole sensing data such as distributed temperature sensors (DTS), discrete distributed temperature sensors (DDTS), discrete distributed strain sensors (DDSS) and distributed acoustic sensors (DAS) are a major challenge. These distributed downhole data sources are characterized by high volume, variety, velocity, veracity, variability and visualization. Currently, the distributed downhole sensing data transfer, storage, processing, archiving, retrieval and interpretation systems in the petroleum industry still face substantial challenges.
Some examples are the high cost of hardware and software, ongoing system support and maintenance, a complicated implementation and deployment framework that is difficult to sustain, scale and upgrade, as well as the need for compatibility among data provided by different vendors. The objective of this paper is to present a platform that offers an automated one-stop shop for distributed downhole sensing data transmission, management and interpretation. This platform employs a big data infrastructure and allows for joint inversion of production and distributed downhole sensing data in a wide range of online real-time reservoir and well monitoring applications. This paper describes a vendor-neutral, scalable web-based enterprise distributed downhole sensing infrastructure for data exchange, management and visualization. The system also allows for calibration of DTS interrogators and integration with PI systems. The platform applies a multi-tier client-server architecture, scalable distributed databases, the Production Markup Language (PRODML), and web services technologies to provide a reliable mechanism to bring distributed downhole sensing data from the field site to the corporate network in real-time and enable users to visualize the data anywhere, at any time. A framework for cleaning distributed downhole sensing data streams in real-time is developed to render the data produced by sensors usable for analysis (removing problems due to noise, outliers, measurement drifts, incorrect calibration and other issues). Using the distributed downhole sensing data management platform, we combine information from physics-based models with cleaned distributed downhole sensing live data to analyze anisotropy in permeability and skin in multilayer formations, estimate inflow profiles, determine multilayer formation or petrophysical properties, and estimate geomechanical and reservoir compaction properties.
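The real-time cleaning framework described above (handling noise, outliers and drift) can be illustrated with a minimal sketch: a rolling-median filter that replaces spike samples in a DTS temperature trace. The function name, window size and deviation threshold are illustrative assumptions, not details of the actual platform.

```python
from statistics import median

def clean_dts_trace(temps, window=5, max_dev=3.0):
    """Replace outlier samples in a DTS temperature trace with the
    local rolling median; a sample counts as an outlier when it
    deviates from that median by more than `max_dev` degrees."""
    cleaned = []
    half = window // 2
    for i, t in enumerate(temps):
        lo, hi = max(0, i - half), min(len(temps), i + half + 1)
        m = median(temps[lo:hi])
        cleaned.append(m if abs(t - m) > max_dev else t)
    return cleaned

trace = [80.1, 80.3, 80.2, 95.0, 80.4, 80.5, 80.3]  # 95.0 is a spike
print(clean_dts_trace(trace))  # spike replaced by local median 80.4
```

In a production pipeline this filter would run per fiber channel on each incoming trace; drift and calibration corrections would be separate stages applied after outlier removal.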
This paper demonstrates the capability of the distributed downhole sensor data infrastructure and information integration platform through the use of different sets of distributed downhole sensing data in various applications.
Given the near ubiquity of fiber-optic, information and communication technologies in reservoir and well management, there is a significant need for one-stop shop downhole distributed sensing data analysis methods that combine machine learning techniques for autonomous analysis of such data sources. However, traditional approaches to converting distributed temperature sensor (DTS) data into actionable insights for optimizing gas lift well operations remain dependent on training based on human annotations. Annotation of downhole distributed temperature sensor data is a laborious task, and it is not feasible in practice to train a big data classification algorithm this way for accurate and reliable anomaly detection of gas lift valves. Furthermore, even obtaining training examples for event diagnosis is challenging due to the rarity of some gas lift valve problems. In gas lift well surveillance, it is essential to generate real-time results so that an engineer can respond swiftly to prevent harmful consequences of gas lift valve failure onsets on well performance. Online learning capability also means that the data classification model can be continuously updated to accommodate changes in the reservoir and well environment. In this paper, we propose a novel online real-time DTS data visual analytics platform for gas lift wells using big data tools. The proposed system combines Apache Kafka for data ingestion, Apache Spark for in-memory data processing and analytics, Apache Cassandra for storing raw data and processed results, and the INT geo toolkit for data visualization. Specifically, the data analytics pipeline uses data mining algorithms to statistically learn features from the DTS measurements. The learned features are used as inputs to a k-means algorithm, and supervised learning is then used to predict the performance status of gas lift valves and raise alarms through an analytics-based intelligent warning system.
The performance of the proposed system architecture for detecting gas lift valve anomalies is evaluated under varying deployment scenarios. To the best of our knowledge, a DTS data analytics pipeline has not previously been used for real-time anomaly detection in gas lift well operations.
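The feature-learning and clustering step of the pipeline above can be sketched in miniature: summarize each DTS trace with simple statistics (mean temperature and spread), then cluster the feature vectors with a tiny two-cluster k-means so anomalous traces separate from healthy ones. All names, the feature choice, and the initialization are illustrative assumptions; the actual pipeline runs on Spark with richer statistical features.

```python
from statistics import mean, stdev

def trace_features(trace):
    """Summarize a DTS temperature trace as (mean, spread) features."""
    return (mean(trace), stdev(trace))

def kmeans_2(points, iters=10):
    """Tiny two-cluster k-means on 2-D feature points."""
    c0, c1 = points[0], points[-1]  # crude initialization from the data
    for _ in range(iters):
        g0, g1 = [], []
        for p in points:
            d0 = (p[0] - c0[0]) ** 2 + (p[1] - c0[1]) ** 2
            d1 = (p[0] - c1[0]) ** 2 + (p[1] - c1[1]) ** 2
            (g0 if d0 <= d1 else g1).append(p)
        if g0:
            c0 = (mean(x for x, _ in g0), mean(y for _, y in g0))
        if g1:
            c1 = (mean(x for x, _ in g1), mean(y for _, y in g1))
    return c0, c1

# Two healthy-valve traces and one anomalous (hot, noisy) trace.
traces = [[80.0, 80.2, 80.1], [80.1, 80.0, 80.2], [95.0, 99.0, 91.0]]
feats = [trace_features(t) for t in traces]
c0, c1 = kmeans_2(feats)  # c1 ends up near the anomalous trace
```

In the full system, the cluster assignments would then label training data for the supervised classifier that drives the intelligent warning system.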
The distributed nature of fiber-optic measurements such as distributed temperature sensing (DTS), distributed acoustic sensing (DAS), and distributed strain sensing (DSS) enables nearly continuous monitoring of the downhole environment in both space and time. Though continuous monitoring opens the door to a rich new set of asset management applications, it comes with its own set of challenges in terms of data transmission, management, and security. Recently, cloud-based fiber-optic data management services have been successfully introduced to the oil and gas industry as an effective way to collect, transfer, store and display distributed measurement data from the downhole environment. To maximize the value of such cloud-based data management systems, and further improve the return on investment for asset managers, the large volume of distributed sensing data collected must be converted to value in a simple and easy-to-use form, depending on different applications. Traditionally, interpretation of distributed sensing data is a manual process conducted by engineers in a post-job workflow. This paper presents the successful integration of an analytics library into the cloud-based fiber-optic data management system. This integration enables real-time, and in some cases near real-time, asset decision making. The design of the analytics architecture is open to meet the wide range of application requirements by asset managers. A few application examples of the analytics integration will be presented using real-time data streamed directly from the field.
Petroleum exploration and production processes typically generate enormous amounts of petro-technical data using sub-surface and surface sensors. The acquisition, transfer, management, and interpretation of these large volumes of sensor data, as well as the decision making based on them, has led to the advent of the digital oilfield phenomenon in the petroleum industry. To achieve improved efficiency, accuracy, and performance, many E&P operators are aiming to apply fiber-optic distributed temperature sensing data management technologies to add value. Currently, high-volume distributed sensing data transfer, storage, processing, archiving, retrieval and exchange systems in the petroleum industry still face big challenges such as the high cost of hardware and software, a complicated implementation and deployment framework that is difficult to sustain, scale and upgrade, as well as compatibility of data provided by different vendors. An efficient online real-time elastically scalable system that enables fast retrieval from big data infrastructures is therefore essential. This paper describes a scalable web-based enterprise fiber-optic infrastructure for data exchange, management and visualization. The platform applies a multi-tier client-server architecture, scalable distributed databases, PRODML (Production Markup Language), and web services technologies to provide a reliable mechanism to bring fiber-optic data from the field site to the corporate network in real-time and enable users to visualize the data anywhere, at any time. Support for the PRODML industry standard makes the platform vendor-neutral and allows data exchange between different systems and data sharing among users and applications. The distributed Cassandra database provides the scalability to handle fiber-optic big data in a high-performance and efficient way.
Finally, the global inventory management system keeps track of changes to the asset and instrumentation configuration over the life of the distributed sensor systems, and makes it possible to correlate the measurement data with the proper asset configuration. A case study is presented that demonstrates successful field testing to verify the functionality of the newly developed system for high-data-volume distributed sensors. Specific attention is given to the many advantages offered by this new framework over existing ones.
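A common way a distributed store like Cassandra achieves the scalability described above for time-series sensor data is to partition readings by well, sensor type and day, so each partition stays bounded and a day's query touches a single partition. A stdlib-only sketch of that keying scheme follows; the schema, field names and IDs are illustrative assumptions, not the described system's actual data model.

```python
from collections import defaultdict
from datetime import datetime, timezone

# Wide-row style layout: one partition per (well, sensor, day),
# with readings ordered by timestamp within the partition.
store = defaultdict(list)

def partition_key(well_id, sensor_type, ts):
    return (well_id, sensor_type, ts.date().isoformat())

def write_reading(well_id, sensor_type, ts, depth_m, value):
    store[partition_key(well_id, sensor_type, ts)].append((ts, depth_m, value))

def read_day(well_id, sensor_type, day):
    """Fetch one day's readings for a well: a single-partition lookup."""
    return sorted(store[(well_id, sensor_type, day)])

ts = datetime(2024, 5, 1, 12, 0, tzinfo=timezone.utc)
write_reading("W-17", "DTS", ts, 1500.0, 82.4)
write_reading("W-17", "DTS", ts, 1501.0, 82.6)
print(len(read_day("W-17", "DTS", "2024-05-01")))  # 2 readings, one partition
```

Bounding each partition to one day keeps hot partitions small under continuous DTS ingestion while preserving efficient range scans within a day.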