Summary The optimization of field development plans (FDPs), which includes optimizing well counts, well locations, and the drilling sequence, is crucial in reservoir management because it strongly impacts project economics. Traditional optimization studies are scenario specific, and their solutions do not generalize to new scenarios (e.g., a new earth model or new price assumptions) that were not seen before. In this paper, we develop an artificial intelligence (AI) using deep reinforcement learning (DRL) to address the generalizable field development optimization problem, in which the AI can provide optimized FDPs in seconds for new scenarios within its range of applicability. In the proposed approach, field development optimization is formulated as a Markov decision process (MDP) in terms of states, actions, environment, and rewards. The policy function, which maps the current reservoir state to the optimal action at the next step, is represented by a deep convolutional neural network (CNN). This policy network is trained using DRL on simulation runs of a large number of different scenarios generated to cover a "range of applicability." Once trained, the DRL AI can be applied to obtain optimized FDPs for new scenarios at minimal computational cost. While the proposed methodology is general, in this paper we apply it to develop a DRL AI that provides optimized FDPs for greenfield primary depletion problems with vertical wells. This AI is trained on more than 3×10⁶ scenarios with different geological structures, rock and fluid properties, operational constraints, and economic conditions, and thus has a wide range of applicability. Once trained, the DRL AI yields optimized FDPs for new scenarios within seconds.
The solutions from the DRL AI suggest that, starting with no reservoir engineering knowledge, the DRL AI has developed the intelligence to place wells at "sweet spots," maintain proper well spacing and well count, and drill early. In a blind test, the solution from the DRL AI is demonstrated to outperform that from the reference agent, an optimized pattern drilling strategy, almost 100% of the time. The DRL AI is being applied to a real field, and preliminary results are promising. Because the DRL AI optimizes a policy rather than a plan for one particular scenario, it can be applied to obtain optimized development plans for different scenarios at very low computational cost. This is fundamentally different from traditional optimization methods, which not only require thousands of simulation runs for a single scenario but also lack the ability to generalize to new scenarios.
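The MDP formulation described above (states, actions, environment, rewards) can be illustrated with a minimal, gym-style environment sketch. Everything here is an assumption for illustration: the class name, the use of a simple "quality map" as a stand-in for a flow simulator, and the discounted-revenue reward proxy are not from the paper.

```python
import numpy as np

class FieldDevelopmentEnv:
    """Minimal sketch of a field-development MDP (hypothetical, for illustration).

    State : stacked 2D maps of a reservoir-quality proxy and current well locations.
    Action: None ("do not drill") or a (row, col) cell in which to drill.
    Reward: incremental NPV proxy = discounted revenue at the new well minus
            drilling cost (a stand-in for running a flow simulation).
    """

    def __init__(self, quality_map, drill_cost=1.0, discount=0.9, max_steps=5):
        self.quality = np.asarray(quality_map, dtype=float)  # proxy for "sweet spots"
        self.drill_cost = drill_cost
        self.discount = discount
        self.max_steps = max_steps
        self.reset()

    def reset(self):
        self.wells = np.zeros_like(self.quality)  # 1 where a well exists
        self.step_count = 0
        return self._state()

    def _state(self):
        # CNN-style input: property channel + well-location channel
        return np.stack([self.quality, self.wells])

    def step(self, action):
        """action = None (do not drill) or a (row, col) tuple."""
        reward = 0.0
        if action is not None:
            r, c = action
            if self.wells[r, c] == 0:
                self.wells[r, c] = 1
                # drilling earlier is worth more (revenue discounted per step),
                # which is why a trained agent tends to drill early
                revenue = self.quality[r, c] * self.discount ** self.step_count
                reward = revenue - self.drill_cost
        self.step_count += 1
        done = self.step_count >= self.max_steps
        return self._state(), reward, done
```

In this toy setting, the discounting of revenue by step count reproduces the "drill early" incentive the trained agent discovered, and the quality map plays the role of the sweet spots it learned to target.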
A general control policy framework based on deep reinforcement learning (DRL) is introduced for closed-loop decision making in subsurface flow settings. Traditional closed-loop modeling workflows in this context involve the repeated application of data assimilation/history matching and robust optimization steps. Data assimilation can be particularly challenging in cases where both the geological style (scenario) and individual model realizations are uncertain. The closed-loop reservoir management (CLRM) problem is formulated here as a partially observable Markov decision process, with the associated optimization problem solved using a proximal policy optimization algorithm. This provides a control policy that instantaneously maps flow data observed at wells (as are available in practice) to optimal well pressure settings. The policy is represented by temporal convolution and gated transformer blocks. Training is performed in a preprocessing step with an ensemble of prior geological models, which can be drawn from multiple geological scenarios. Example cases involving the production of oil via water injection, with both 2D and 3D geological models, are presented. The DRL-based methodology is shown to result in an NPV increase of 15% (for the 2D cases) and 33% (for the 3D cases) relative to robust optimization over prior models, and an average NPV improvement of 4% relative to traditional CLRM. The solutions from the control policy are found to be comparable to those from deterministic optimization, in which the geological model is assumed to be known, even when multiple geological scenarios are considered. The control policy approach results in a 76% decrease in computational cost relative to traditional CLRM with the algorithms and parameter settings considered in this work.
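The key idea of this abstract, a policy that maps a history of well observations directly to pressure settings, can be sketched with a single causal temporal convolution and a linear read-out. This is a deliberate simplification: the paper's policy also uses gated transformer blocks and is trained with PPO, and all parameter names and the BHP bounds below are assumptions for illustration.

```python
import numpy as np

def temporal_conv_policy(obs_history, kernel, w_out, b_out,
                         bhp_min=150.0, bhp_max=300.0):
    """Simplified stand-in for the control policy: maps a history of well
    observations to a bottomhole-pressure (BHP) setting per well.

    obs_history  : (T, n_wells) array of observed rates/pressures at wells
    kernel       : (K,) causal temporal-convolution filter, shared across wells
    w_out, b_out : scalar linear read-out parameters
    Returns an (n_wells,) array of BHP settings squashed into [bhp_min, bhp_max].
    """
    obs = np.asarray(obs_history, dtype=float)
    T, n_wells = obs.shape
    K = len(kernel)
    # causal convolution: the feature at time t uses only observations up to t
    padded = np.vstack([np.zeros((K - 1, n_wells)), obs])
    feat = sum(kernel[k] * padded[K - 1 - k : K - 1 - k + T] for k in range(K))
    latest = feat[-1]                 # feature at the current time step
    logits = w_out * latest + b_out   # per-well linear read-out
    squashed = 1.0 / (1.0 + np.exp(-logits))  # sigmoid keeps settings bounded
    return bhp_min + (bhp_max - bhp_min) * squashed
```

The sigmoid squashing illustrates one common way such a policy respects operational pressure bounds by construction, so no constraint handling is needed during training.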
Oil and gas field development optimization, which involves determining the optimal number of wells, their drilling sequence, and their locations while satisfying operational and economic constraints, represents a challenging computational problem. In this work, we present a deep-reinforcement-learning-based artificial intelligence agent that can provide optimized development plans, given a basic description of the reservoir and rock/fluid properties, at minimal computational cost. This artificial intelligence agent, comprising a convolutional neural network, maps a given state of the reservoir model, constraints, and economic conditions to the optimal decision (drill/do not drill and well location) to be taken in the next stage of the defined sequential field development planning process. The state of the reservoir model is defined using parameters that appear in the governing equations of two-phase flow (such as well index, transmissibility, fluid mobility, and accumulation). A feedback-loop training process referred to as deep reinforcement learning is used to train an artificial intelligence agent with this capability. The training entails millions of flow simulations with varying reservoir model descriptions (structural, rock, and fluid properties), operational constraints (maximum liquid production, drilling duration, and water-cut limit), and economic conditions. The parameters that define the reservoir model, operational constraints, and economic conditions are randomly sampled from a defined range of applicability. Several algorithmic treatments are introduced to enhance the training of the artificial intelligence agent. After appropriate training, the artificial intelligence agent provides an optimized field development plan instantly for new scenarios within the defined range of applicability.
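The state definition above, built from parameters in the two-phase flow equations rather than raw property maps, can be sketched as a function that stacks a few derived 2D channels for the CNN. The specific relations used here (harmonic-average transmissibility, Corey-type relative permeabilities, a movable-oil proxy for accumulation) are illustrative assumptions, not the paper's exact parameterization.

```python
import numpy as np

def build_state_channels(perm, sat_w, mu_w=0.5, mu_o=2.0, dx=50.0, dz=10.0):
    """Illustrative construction of CNN input channels from parameters that
    appear in the two-phase flow equations (assumed forms, for illustration).

    perm  : (ny, nx) permeability map [mD]
    sat_w : (ny, nx) water-saturation map
    Returns a (3, ny, nx) array: x-transmissibility proxy, total fluid
    mobility, and an accumulation (movable-oil) proxy.
    """
    k = np.asarray(perm, dtype=float)
    sw = np.asarray(sat_w, dtype=float)

    # harmonic average of neighboring-cell permeability in x, padded at the
    # last column so all channels share the same (ny, nx) shape
    harm = 2.0 * k[:, :-1] * k[:, 1:] / (k[:, :-1] + k[:, 1:])
    trans_x = np.pad(harm, ((0, 0), (0, 1)), mode="edge") * dz / dx

    # Corey-type relative permeabilities with illustrative exponents of 2
    krw = np.clip(sw, 0.0, 1.0) ** 2
    kro = np.clip(1.0 - sw, 0.0, 1.0) ** 2
    mobility = krw / mu_w + kro / mu_o  # total fluid mobility

    accumulation = 1.0 - sw             # movable-oil proxy

    return np.stack([trans_x, mobility, accumulation])
```

Feeding the network governing-equation parameters instead of raw rock/fluid properties is one way to help a single trained agent generalize across reservoirs whose raw descriptions differ but whose flow behavior is similar.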
This approach has advantages over traditional optimization algorithms (e.g., particle swarm optimization, genetic algorithms), which are generally used to find a solution for a specific field development scenario and are typically not generalizable to different scenarios. The performance of the artificial intelligence agents for two- and three-dimensional subsurface flow problems is compared to that of well-pattern agents. Optimization results using the new procedure are shown to significantly outperform those from the well-pattern agents.
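The well-pattern agents used as the reference baseline can be sketched as a simple placement rule: wells on a regular square pattern with a given spacing. In practice the spacing would itself be tuned (e.g., by maximizing NPV over the spacing parameter); only the placement rule is shown here, and the function name and grid-cell units are assumptions.

```python
def pattern_drilling_plan(ny, nx, spacing):
    """Sketch of a reference well-pattern agent: place vertical wells on a
    regular square pattern over an ny-by-nx grid.

    spacing : inter-well distance in grid cells; the first row/column of
              wells is offset by spacing // 2 from the grid edge.
    Returns a list of (row, col) well locations in drilling order.
    """
    half = spacing // 2
    rows = range(half, ny, spacing)
    cols = range(half, nx, spacing)
    return [(r, c) for r in rows for c in cols]
```

Because this baseline ignores property maps entirely, any agent that learns to concentrate wells in high-quality regions has a clear mechanism for outperforming it, which is consistent with the comparisons reported above.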