Competitiveness in the Oil and Gas (O&G) sector has required high technological investments to support data-centric decisions. One of the trends is the adoption of Digital Twins (DTs), which use virtual spaces and advanced analytical services to monitor and improve physical spaces. Central to the interconnection of these systems is a Data Fusion Core (DFC) component, which provides data management capabilities. Although the literature has proposed data management functionality in the scope of specific O&G DT applications, several joint standardization efforts also address data integration and interoperability in the industry. The Open Subsurface Data Universe (OSDU) data platform is an initiative by several partner members of The Open Group consortium created to eliminate data silos in the O&G ecosystem and leverage innovation through a data-driven approach. In this article, we examine how this effort converges with the data management needs of digital twins, highlighting strengths, gaps, and opportunities. We investigated the extent to which the OSDU data platform meets the needs of a DFC implementation, with a focus on interoperability, integration, governance, and data lineage. We also propose additional resources for data management in this context, namely data enrichment, workflows, and data lineage. Our main contributions are: (i) an analysis of possible data management capabilities for creating a working DFC for an O&G DT and (ii) initial ideas on the complementary role of OSDU data representation and ontologies, and how this semantic enrichment can be leveraged in the DFC of a DT.
Competitiveness in the oil and gas industry has required high technological investments to support data-centric decisions. One of the trends is Digital Twins, which rely on virtual spaces and advanced analytical services to monitor and improve physical spaces. A Data Fusion Core (DFC) interconnects these systems. The OSDU data platform is an initiative by several partners to eliminate data silos in the oil ecosystem and leverage innovation through a data-driven approach. In this work, we analyze the extent to which the OSDU data platform can meet the needs of a DFC implementation, with a focus on interoperability, integration, and data lineage.
Monitoring and forecasting oil and gas (O\&G) production is essential to extend the life of a well and increase reservoirs' productivity. Popular models for O\&G time series are ARIMA and LSTM recurrent networks, and typically several lags are forecasted at once. LSTM models can deploy the recursive prediction strategy, which uses one prediction to make the next, or the multiple outputs (MO) strategy, which predicts a sequence of values in a single shot. This work assesses ARIMA and LSTM models for forecasting petroleum production time series. We use time series of pressure and gas/oil flow from actual wells with distinct properties, for which we developed predictive models considering different time horizons. For the LSTM models, we deploy both the recursive and MO strategies. Our comparison revealed the superiority of LSTM models in general, and of MO-based models for longer time horizons.
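The recursive and MO strategies contrasted above can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the toy `predict_one` model (a window mean standing in for a trained LSTM), the function names, and the window/horizon parameters are all assumptions made for clarity.

```python
import numpy as np

def predict_one(window):
    """Toy single-step model: predicts the mean of the input window.
    A stand-in for a trained LSTM that emits one lag per call."""
    return float(np.mean(window))

def recursive_forecast(history, horizon, window_size):
    """Recursive strategy: each prediction is fed back as an input
    to produce the next one, sliding the window one step at a time."""
    buf = list(history[-window_size:])
    preds = []
    for _ in range(horizon):
        p = predict_one(np.array(buf))
        preds.append(p)
        buf = buf[1:] + [p]  # drop oldest value, append own prediction
    return preds

def multi_output_forecast(history, horizon, window_size):
    """MO strategy: a single model call emits the whole horizon at once.
    Here the toy multi-output model repeats the window mean (illustrative);
    a real MO LSTM would have a dense output layer of size `horizon`."""
    window = np.array(history[-window_size:])
    return [float(np.mean(window))] * horizon
```

The practical trade-off the abstract evaluates: the recursive strategy compounds its own prediction errors over long horizons, while the MO strategy predicts all steps jointly, which is one reason MO-based models can fare better on longer intervals.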
The Internet of Things (IoT), personal and wearable devices, and continuous advances in data-gathering techniques have significantly increased the amount of relevant data that can be leveraged for innovative real-time, data-driven applications. Digital Twins (DTs) are virtual representations of physical objects that are fully integrated with them, with automatic, bidirectional data exchange. DTs and big data are mutually reinforcing technologies, since huge volumes of data representing the physical/virtual worlds are collected, transformed, and generated through models to add value to the business. Modern DTs follow a five-component architecture, which includes a Data Management (DM) component that bridges a physical system, a mirrored virtual one, and service components. However, there is no clarity on the functionality required for the DM component. This work presents a Systematic Literature Review on DM issues and proposed solutions in the DT context. We analyzed DM under the big data value chain activities, highlighting key issues to be addressed: data heterogeneity, interoperability, integration, data search/discovery, and quality. In addition to surveying existing solutions for handling these issues, we contextualized them in the domain and function for which the DT was proposed, the type of data dealt with, and the technical infrastructure. The compilation of these solutions sheds light on the functionality of the DM component in a DT, as well as on trends and opportunities. Our main findings reveal that the maturity level of the DM component is at an early stage. The most mature solutions were proposed for the industry domain, and many of them assume humans as the ultimate information consumers. Data integration is the most frequently addressed DM issue, owing to the bridging role of the DM component, and cloud computing is the key implementation technology.
Among the research opportunities are reference data management architectures, adoption of industry standards and ontologies, interoperability among distinct DTs, the development of agnostic standard implementations, and data provenance mechanisms.