The key ingredients to successful real-time reservoir management, also known as a "closed-loop" approach, include efficient optimization and model-updating (history-matching) algorithms, as well as techniques for efficient uncertainty propagation. This work discusses a simplified implementation of the closed-loop approach that combines efficient optimal control and model-updating algorithms for real-time production optimization. An adjoint model is applied to provide gradients of the objective function with respect to the well controls; these gradients are then used with standard optimization algorithms to determine optimum well settings. To enable efficient history matching, Bayesian inversion theory is used in combination with an optimal representation of the unknown parameter field in terms of a Karhunen-Loève expansion. This representation allows for the direct application of adjoint techniques for the history match while assuring that the two-point geostatistics of the reservoir description are maintained. The benefits and efficiency of the overall closed-loop approach are demonstrated through real-time optimizations of net present value (NPV) for synthetic reservoirs under waterflood subject to production constraints and uncertain reservoir description. For two example cases, the closed-loop optimization methodology is shown to provide a substantial improvement in NPV over the base case, and the results are seen to be quite close to those obtained when the reservoir description is known a priori.
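As an illustration of the parameter representation described above, the following sketch builds a truncated Karhunen-Loève expansion of a Gaussian random field from the eigendecomposition of its covariance matrix. The exponential covariance, 1D grid, and correlation length are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def kl_expansion(cov, n_terms, xi):
    """Truncated Karhunen-Loeve expansion of a zero-mean Gaussian field.

    cov     : (n, n) covariance matrix of the field on the grid
    n_terms : number of retained eigenmodes
    xi      : (n_terms,) independent standard-normal coefficients
    Returns one realization of the field on the grid.
    """
    # Eigendecomposition of the symmetric covariance (ascending eigenvalues)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]            # sort modes by energy, descending
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    # field = sum_k sqrt(lambda_k) * xi_k * phi_k over the leading modes
    return eigvecs[:, :n_terms] @ (np.sqrt(eigvals[:n_terms]) * xi)

# Exponential covariance on a 1D grid, purely for illustration
n = 50
x = np.linspace(0.0, 1.0, n)
cov = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.2)
rng = np.random.default_rng(0)
field = kl_expansion(cov, n_terms=10, xi=rng.standard_normal(10))
print(field.shape)  # (50,)
```

Because the field is now parameterized by a small number of coefficients `xi`, gradients of a history-matching objective can be taken with respect to `xi` while the covariance (two-point) structure of the field is preserved by construction.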
Summary The ensemble Kalman filter (EnKF) has been reported to be very efficient for real-time updating of reservoir models to match the most current production data. With the EnKF, an ensemble of reservoir models assimilating the most current observations of production data is always available; thus, the estimates of reservoir model parameters, their associated uncertainty, and the forecasts are always up to date. In this paper, we apply the EnKF to continuously update an ensemble of permeability models to match real-time multiphase production data. We improve on previous EnKF implementations by adding a confirming option (i.e., the flow equations are re-solved from the previous assimilation step to the current step using the updated current permeability models). By doing so, we ensure that the updated static and dynamic parameters are always consistent with the flow equations at the current step. However, this also creates some inconsistency between the static and dynamic parameters at the previous step, where the confirming starts. Nevertheless, we show that, with the confirming approach, the filter performs better for the particular example investigated. We also investigate the sensitivity of the EnKF to the number of realizations. Our results show that a relatively large number of realizations is needed to obtain stable results, particularly for a reliable assessment of uncertainty. The sensitivity to different covariance functions is also investigated. The efficiency and robustness of the EnKF are demonstrated using an example. By assimilating more production data, new features of heterogeneity in the reservoir model can be revealed with reduced uncertainty, resulting in more accurate predictions of reservoir production. Introduction The reliability of reservoir models increases as more data are included in their construction.
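The EnKF analysis step described above can be sketched as follows. The function and variable names are illustrative; the observation operator, ensemble size, and observation-error model in the usage example are assumptions for a toy scalar case, not the paper's reservoir setup.

```python
import numpy as np

def enkf_update(ensemble, h, d_obs, obs_err_std, rng):
    """One EnKF analysis (update) step.

    ensemble    : (n_state, n_ens) state vectors (e.g., log-permeability
                  augmented with simulated production responses)
    h           : observation operator mapping a state vector to predicted data
    d_obs       : (n_obs,) observed production data
    obs_err_std : observation-error standard deviation
    """
    n_ens = ensemble.shape[1]
    # Predicted data for each ensemble member
    D = np.column_stack([h(ensemble[:, j]) for j in range(n_ens)])
    # Anomalies (deviations from the ensemble mean)
    A = ensemble - ensemble.mean(axis=1, keepdims=True)
    Y = D - D.mean(axis=1, keepdims=True)
    # Cross- and data-covariances estimated from the ensemble
    C_xy = A @ Y.T / (n_ens - 1)
    C_yy = Y @ Y.T / (n_ens - 1) + obs_err_std**2 * np.eye(len(d_obs))
    K = C_xy @ np.linalg.inv(C_yy)               # Kalman gain
    # Perturb observations so the updated ensemble keeps the correct spread
    perturbed = d_obs[:, None] + obs_err_std * rng.standard_normal((len(d_obs), n_ens))
    return ensemble + K @ (perturbed - D)

# Toy scalar example: prior N(0, 1), observe the state directly with noise
rng = np.random.default_rng(1)
prior = rng.standard_normal((1, 200))
posterior = enkf_update(prior, lambda x: x, np.array([5.0]), 0.5, rng)
```

The sensitivity to ensemble size noted in the abstract shows up here directly: `C_xy` and `C_yy` are sample estimates, so small ensembles give noisy gains and unreliable posterior spread.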
Traditionally, static (hard and soft) data, such as geological, geophysical, and well log/core data, are incorporated into reservoir geological models through conditional geostatistical simulation (Deutsch and Journel 1998). Dynamic production data, such as historical measurements of reservoir production, account for the majority of reservoir data collected during the production phase. These data are directly related to the recovery process and to the response variables that form the basis for reservoir management decisions. Incorporation of dynamic data is typically done through a history-matching process. Traditionally, history matching adjusts model variables (such as permeability, porosity, and transmissibility) so that the flow simulation results using the adjusted parameters match the observations. It usually requires repeated flow simulations. Both manual and (semi-)automatic history-matching processes are available in the industry (Chen et al. 1974; He et al. 1996; Landa and Horne 1997; Milliken and Emanuel 1998; Vasco et al. 1998; Wen et al. 1998a, 1998b; Roggero and Hu 1998; Agarwal and Blunt 2003; Caers 2003; Cheng et al. 2004). Automatic history matching is usually formulated as a minimization problem in which the mismatch between measurements and computed values is minimized (Tarantola 1987; Sun 1994). Gradient-based methods are widely employed for such minimization problems and require the computation of sensitivity coefficients (Li et al. 2003; Wen et al. 2003; Gao and Reynolds 2006). Over the past decade, automatic history matching has been a very active research area with significant progress reported (Cheng et al. 2004; Gao and Reynolds 2006; Wen et al. 1997). However, most approaches are either limited to small and simple reservoir models or are computationally too intensive for practical applications.
Under the framework of traditional history matching, uncertainty is usually assessed by repeating the history-matching process with different initial models, which makes the process even more CPU-demanding. In addition, traditional history-matching methods are not designed to allow continuous model updating: when new production data become available and must be incorporated, the history-matching process has to be repeated using all measured data. These limitations restrict the efficiency and applicability of traditional automatic history-matching techniques.
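The minimization formulation of automatic history matching described above can be sketched with a weighted least-squares mismatch objective and plain gradient descent. This is a minimal illustration: a real history-matching code would obtain the gradient from an adjoint solve and use a reservoir simulator in place of the toy linear forward model assumed here.

```python
import numpy as np

def mismatch(m, forward, d_obs, sigma):
    """Weighted least-squares data mismatch: J(m) = 1/2 ||(g(m) - d_obs)/sigma||^2."""
    r = (forward(m) - d_obs) / sigma
    return 0.5 * float(r @ r)

def history_match(m0, forward, jacobian, d_obs, sigma, step=0.1, n_iter=200):
    """Plain gradient descent on J; practical codes use adjoints for the gradient."""
    m = m0.copy()
    for _ in range(n_iter):
        r = (forward(m) - d_obs) / sigma
        grad = jacobian(m).T @ (r / sigma)       # dJ/dm for the least-squares objective
        m -= step * grad
    return m

# Toy linear 'simulator': data = G m (stand-in for a flow simulation)
G = np.array([[1.0, 0.5], [0.2, 1.0], [0.3, 0.3]])
m_true = np.array([2.0, -1.0])
d_obs = G @ m_true
m_fit = history_match(np.zeros(2), lambda m: G @ m, lambda m: G, d_obs, sigma=1.0)
```

Each iteration of the loop corresponds to one forward simulation plus one gradient evaluation, which is why the repeated-restart uncertainty workflow criticized above multiplies an already heavy cost.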
Summary The general petroleum-production optimization problem falls into the category of optimal control problems with nonlinear control-state path inequality constraints (i.e., constraints that must be satisfied at every time step), and it is acknowledged that such path constraints involving state variables can be difficult to handle. Currently, one category of methods implicitly incorporates the constraints into the forward and adjoint equations to address this issue. However, these methods either are impractical for the production optimization problem or require complicated modifications to the forward-model equations (the simulator). Therefore, the usual approach is to formulate this problem as a constrained nonlinear-programming (NLP) problem in which the constraints are calculated explicitly after the dynamic system is solved. The most popular of this category of methods for optimal control problems has been the penalty-function method and its variants, which are, however, extremely inefficient. All other constrained NLP algorithms require a gradient for each constraint, which is impractical for an optimal control problem with path constraints because one adjoint must be solved for each constraint at each time step in every iteration. The authors propose an approximate feasible-direction NLP algorithm based on the objective-function gradient and a combined gradient for the active constraints. This approximate feasible direction is then converted into a true feasible direction by projecting it onto the active constraints and solving the constraints during the forward-model evaluation itself. The approach has various advantages. First, only two adjoint evaluations are required in each iteration. Second, the solutions obtained are feasible (within a specified tolerance) because feasibility is maintained by the forward model itself, implying that any solution can be considered a useful solution. 
Third, large step sizes are possible during the line search, which may lead to significant reductions in the number of forward- and adjoint-model evaluations and large reductions in the magnitude of the objective function. Through two examples, the authors demonstrate that this algorithm provides a practical and efficient strategy for production optimization with nonlinear path constraints.
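One way to picture the projection step described above is to project the objective gradient onto the null space of the active-constraint gradients, yielding a direction that (to first order) leaves the active constraints satisfied. This is a minimal first-order sketch with illustrative names, not the authors' full algorithm, which also enforces feasibility inside the forward model itself.

```python
import numpy as np

def feasible_direction(grad_obj, grad_active):
    """Project the objective gradient onto the null space of the active
    constraint gradients.

    grad_obj    : (n,) gradient of the objective w.r.t. the controls
    grad_active : (m, n) stacked gradients of the active constraints
    """
    A = np.atleast_2d(grad_active)
    # Orthogonal projector onto the null space of A: P = I - A^T (A A^T)^-1 A
    P = np.eye(A.shape[1]) - A.T @ np.linalg.solve(A @ A.T, A)
    return P @ grad_obj

# Example: search direction with one active constraint whose gradient is a
grad_obj = np.array([1.0, 1.0, 0.0])
a = np.array([[1.0, 0.0, 0.0]])      # active constraint gradient
d = feasible_direction(grad_obj, a)
print(d)  # → [0. 1. 0.]
```

Because only the stacked matrix of active-constraint gradients is needed, a single combined adjoint can supply it, consistent with the abstract's point that two adjoint evaluations per iteration suffice.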
A key reservoir management decision taken throughout the life of a reservoir is the determination of optimal well locations that maximize asset value (such as net present value, NPV). Because this well placement optimization problem is a discrete-parameter problem (well locations are discrete parameters in the simulation model), gradients of the objective function (NPV) with respect to these parameters are not defined. Thus, gradient-based methods have not found much applicability to this problem, and most existing algorithms applied to it are stochastic in nature, such as genetic algorithms, simulated annealing, and stochastic perturbation methods. These methods are usually quite inefficient, requiring hundreds of simulations, and thus may have limited application to large-scale simulation models with many wells. We propose a novel, continuous approximation to the original discrete-parameter well placement problem such that gradients can be calculated on the approximate problem, and gradient-based algorithms can then be employed for efficiently determining the optimal well locations. The basic idea is to first replace the discrete parameters (i, j well location indices) with their continuous counterparts in the spatial domain (x, y well locations) and then obtain a continuous functional relationship between the objective function and these continuous parameters. Such a functional relationship is obtained by replacing the discontinuous Dirac delta functions (defining wells as point sources) in the underlying governing PDE with continuous functions that tend to the Dirac delta function in the limit (such as the bivariate Gaussian function). Numerical discretization of the modified PDE leads to well terms in the mass balance equations that are continuous functions of the continuous well location variables.
As a result of this continuous functional relationship, adjoints and gradient-based optimization algorithms can now be applied to obtain the optimal well locations. The efficiency and practical applicability of the approach are demonstrated on a few synthetic waterflood optimization problems. Introduction A key reservoir management decision necessary throughout the life of an oilfield is the determination of optimal well locations that maximize asset value. The current industry practice is usually a manual approach wherein the engineer essentially uses engineering judgment and numerical simulation to determine such locations. Although such an approach may be viable for small reservoirs with a small number of wells, it is unlikely to be applicable when dealing with large reservoirs (with large-scale simulation models) and a large number of wells. Recently, however, there has been increasing interest in solving this problem more efficiently with automatic optimization algorithms. This optimal well placement problem is usually formulated as a discrete parameter optimization problem, because the well location variables are discrete variables (i, j indices of grid blocks where wells are located). Because of the discrete nature of the problem, gradients of the objective function (NPV, for example) with respect to these discrete variables do not exist. As a result, gradient-based optimization algorithms have not found much applicability to this problem, and most existing algorithms applied to it are stochastic gradient-free algorithms, such as genetic algorithms (Montes and Bartolome, 2001; Yeten, 2003), simulated annealing (Beckner and Song, 1995), and stochastic perturbation methods (Spall, 2003; Bangerth et al., 2006).
Although these algorithms are easy to apply and are supposedly global in nature, they are usually quite inefficient, requiring hundreds of simulations, and thus may have limited application to large-scale simulation models with many wells. Furthermore, they do not guarantee a monotonic increase in the objective function with successive iterations, implying that increasing the computational effort may not necessarily provide a better optimum.
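The continuous approximation described above can be sketched by replacing the point-source delta with a normalized bivariate Gaussian evaluated at cell centers, so the discretized well term becomes a smooth function of the continuous well coordinates. The grid, well location, and Gaussian width below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def gaussian_well_source(xw, yw, xc, yc, sigma):
    """Smoothed well term: bivariate Gaussian weights at cell centers (xc, yc)
    approximating a Dirac delta at the continuous well location (xw, yw).
    As sigma -> 0 the weights concentrate in the cell containing the well.
    """
    w = np.exp(-((xc - xw)**2 + (yc - yw)**2) / (2.0 * sigma**2))
    return w / w.sum()           # normalize so the total well rate is preserved

# Cell centers of a 10x10 grid on the unit square
x = (np.arange(10) + 0.5) / 10
xc, yc = np.meshgrid(x, x, indexing="ij")
w = gaussian_well_source(0.37, 0.62, xc, yc, sigma=0.1)
# The weights vary smoothly with (xw, yw), so d(well term)/d(xw) exists and
# adjoint gradients with respect to well location can be computed.
```

In contrast to the discrete (i, j) formulation, perturbing `xw` or `yw` here changes every weight continuously, which is exactly what makes gradient-based placement optimization possible.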