The ability of a model-based real-time optimization (RTO) scheme to converge to the plant optimum relies on the capability of the underlying process model to predict the plant's necessary conditions of optimality (NCO). These include the values and gradients of the active constraints, as well as the gradient of the cost function. Hence, in the presence of plant-model mismatch or unmeasured disturbances, one could use (estimates of) the plant NCO to track the plant optimum. This paper shows how to formulate a modified optimization problem that incorporates such information. The so-called modifiers, which express the difference between the measured or estimated plant NCO and those predicted by the model, are added to the constraints and the cost function of the modified optimization problem and are adapted iteratively. Local convergence and model-adequacy issues are analyzed. The modifier-adaptation scheme is tested experimentally via the RTO of a three-tank system.
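As an illustration of the modifier-adaptation idea described above, the following is a minimal sketch of one adaptation iteration, not the paper's implementation. The toy cost and constraint functions (phi_model, g_model, phi_plant, g_plant) and the finite-difference plant gradients are assumptions introduced only for demonstration.

```python
# Hedged sketch of one modifier-adaptation iteration (illustrative only).
# The plant/model functions and the toy quadratic forms are assumptions.
import numpy as np
from scipy.optimize import minimize

def phi_model(u):  return (u[0] - 1.0)**2          # model cost
def g_model(u):    return np.array([u[0] - 2.0])   # model constraint g <= 0
def phi_plant(u):  return (u[0] - 1.4)**2 + 0.1    # mismatched "plant" cost
def g_plant(u):    return np.array([u[0] - 1.8])   # "plant" constraint

def fd_grad(f, u, h=1e-5):
    """Forward-difference gradient (stand-in for measured plant gradients)."""
    f0 = np.atleast_1d(f(u))
    return np.array([(np.atleast_1d(f(u + h*e)) - f0) / h
                     for e in np.eye(len(u))]).T

u_k = np.array([0.5])
for k in range(10):
    # Zeroth- and first-order modifiers: plant NCO minus model NCO at u_k
    eps   = g_plant(u_k) - g_model(u_k)
    lam_g = fd_grad(g_plant, u_k) - fd_grad(g_model, u_k)
    lam_f = (fd_grad(phi_plant, u_k) - fd_grad(phi_model, u_k)).ravel()

    # Modified model-based optimization problem with adapted modifiers
    cost = lambda u: phi_model(u) + lam_f @ (u - u_k)
    cons = {'type': 'ineq',
            'fun': lambda u: -(g_model(u) + eps + lam_g @ (u - u_k))}
    u_k = minimize(cost, u_k, constraints=[cons]).x

print("converged input:", u_k)  # approaches the plant optimum despite mismatch
```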
Theory and implementation for the global optimization of a wide class of algorithms is presented via convex/affine relaxations. The basis for the proposed relaxations is the systematic construction of subgradients for the convex relaxations of factorable functions in the sense of McCormick [Math. Prog., 10 (1976), pp. 147-175]. Similar to the convex relaxation itself, the subgradient propagation relies on the recursive application of a few rules, namely, the calculation of subgradients for addition, multiplication, and composition operations. Subgradients at interior points can be calculated for any factorable function for which a McCormick relaxation exists, provided that subgradients are known for the relaxations of the univariate intrinsic functions. For boundary points, additional assumptions are necessary. An automated implementation based on operator overloading is presented, and the calculation of bounds based on affine relaxation is demonstrated for illustrative examples. Two numerical examples for the global optimization of algorithms are presented. In both examples a parameter estimation problem with embedded differential equations is considered, and the solution of the differential equations is approximated by algorithms with a fixed number of iterations.

1. Introduction. The development of deterministic algorithms based on continuous and/or discrete branch-and-bound [10,17,18] has facilitated the global optimization of nonconvex programs. The basic principle of branch-and-bound, and of related algorithms such as branch-and-cut [19] and branch-and-reduce [27], is to bound the optimal objective value between a lower bound and an upper bound. By branching on the host set, these bounds become tighter and eventually converge. For minimization, upper bounds are typically obtained via a feasible point or via a local solution of the original program. For the lower bound, a convex or affine relaxation of the nonconvex program is typically constructed and solved to global optimality via a convex solver. Convex and concave envelopes or tight relaxations are known for a variety of simple nonlinear terms [1,33,35], which allows the construction of convex and concave relaxations for a quite general class of functions through several methods [21,2,33,12]. Simple lower bounds from interval analysis are also widely used in global optimization, e.g., [6,7,25]. Such bounds are often weaker but less computationally expensive to evaluate than relaxation-based bounds; for instance, for a box-constrained problem, no linear program (LP) or convex nonlinear program (NLP) needs to be solved. The majority of the literature on global optimization considers nonconvex programs for which explicit functions are known for the objective and constraints. A more
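To make the recursive propagation idea concrete, the following is a minimal sketch assuming a tiny McCormick-style object that carries interval bounds, convex/concave relaxation values, and their subgradients. Only the addition rule and the composition rule for exp are shown, and the class design is an illustrative assumption rather than the operator-overloading library described in the paper.

```python
# Illustrative sketch of McCormick-style relaxation and subgradient propagation
# (addition and exp-composition rules only; not the paper's implementation).
import math
from dataclasses import dataclass, field

@dataclass
class MC:
    lo: float                                  # interval lower bound
    up: float                                  # interval upper bound
    cv: float                                  # convex relaxation value at the point
    cc: float                                  # concave relaxation value at the point
    dcv: list = field(default_factory=list)    # subgradient of the convex relaxation
    dcc: list = field(default_factory=list)    # subgradient of the concave relaxation

    def __add__(self, other):
        # Sum rule: bounds, relaxations, and subgradients add component-wise.
        return MC(self.lo + other.lo, self.up + other.up,
                  self.cv + other.cv, self.cc + other.cc,
                  [a + b for a, b in zip(self.dcv, other.dcv)],
                  [a + b for a, b in zip(self.dcc, other.dcc)])

def mc_exp(x: MC) -> MC:
    # exp is convex and increasing: the convex relaxation is exp(x.cv);
    # the concave relaxation is the secant over [x.lo, x.up] evaluated at x.cc.
    lo, up = math.exp(x.lo), math.exp(x.up)
    slope = (up - lo) / (x.up - x.lo) if x.up > x.lo else 0.0
    cv = math.exp(x.cv)
    cc = lo + slope * (x.cc - x.lo)
    dcv = [cv * s for s in x.dcv]         # chain rule through the convex relaxation
    dcc = [slope * s for s in x.dcc]      # secant slope times concave subgradient
    return MC(lo, up, cv, cc, dcv, dcc)

# Variable x on [0, 2], evaluated at x = 1:
x = MC(0.0, 2.0, 1.0, 1.0, [1.0], [1.0])
f = mc_exp(x) + x          # relaxations of f(x) = exp(x) + x with subgradients
print(f.cv, f.dcv, f.cc, f.dcc)
```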
A Dual Modifier-Adaptation Approach for Real-Time Optimization

For good performance in practice, real-time optimization schemes need to be able to deal with the inevitable plant-model mismatch problem. Unlike the two-step schemes combining parameter estimation and optimization, the modifier-adaptation approach does not require the model parameters to be estimated on-line. Instead, it uses information regarding the constraints and selected gradients to improve the plant operation. The dual modifier-adaptation approach presented in this paper drives the process towards optimality, while paying attention to the accuracy of the estimated gradients. The gradients are estimated from successive operating points generated by the optimization algorithm. The novelty lies in the development of an upper bound on the norm of the gradient errors, which is used as a constraint when determining the next operating point. The proposed approach is demonstrated via numerical simulation for both an unconstrained and a constrained problem.
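The sketch below illustrates the gradient-estimation step only, under stated assumptions: the plant gradient is fitted by least squares through the most recent operating points, and the conditioning of the input-difference matrix is used as a simple stand-in for the paper's gradient-error bound, which is not reproduced here. The function names and toy data are hypothetical.

```python
# Hedged sketch: gradient estimation from past operating points, with a
# conditioning-based surrogate for a gradient-accuracy criterion.
import numpy as np

def gradient_from_points(U, phi):
    """Least-squares gradient of phi from the n+1 most recent inputs U (rows)."""
    dU   = U[1:] - U[0]              # input differences w.r.t. the oldest point
    dphi = phi[1:] - phi[0]          # corresponding cost differences
    grad, *_ = np.linalg.lstsq(dU, dphi, rcond=None)
    return grad, np.linalg.cond(dU)  # condition number ~ sensitivity to noise

# Toy data: three operating points of a two-input "plant" cost (illustrative).
U   = np.array([[1.0, 1.0], [1.2, 1.0], [1.0, 1.3]])
phi = np.array([(u[0] - 2)**2 + (u[1] - 1.5)**2 for u in U])
grad, kappa = gradient_from_points(U, phi)
print(grad, kappa)

# In a dual scheme, the next input would be chosen so that the updated point
# set keeps the gradient-error measure small (here: keeps kappa moderate)
# while still improving the modified model-based objective.
```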
An overview of global methods for dynamic optimization and mixed-integer dynamic optimization (MIDO) is presented, with emphasis placed on the control-parametrization approach. These methods extend existing continuous and mixed-integer global optimization algorithms to problems with ODEs embedded. A prerequisite is a convexity theory for dynamic optimization, together with the ability to construct valid convex relaxations of Bolza-type functionals. For solving dynamic optimization problems globally, our focus is on the use of branch-and-bound algorithms; MIDO problems, on the other hand, are handled by adapting the outer-approximation algorithm originally developed for mixed-integer nonlinear programs (MINLPs) to optimization problems with embedded ODEs. Each of these algorithms is thoroughly discussed and illustrated. Future directions for research are also discussed, including the recent development of general convex and concave relaxations for the solutions of nonlinear ODEs.
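For orientation, the following is a minimal sketch of the control-parametrization idea referenced above, assuming a piecewise-constant control discretization and a local NLP solver as a stand-in for the global branch-and-bound or outer-approximation layers discussed in the paper; the dynamics, cost, and bounds are illustrative assumptions.

```python
# Illustrative control-parametrization sketch (local solve only; the global
# optimization layers from the paper are not shown).
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

T, N = 1.0, 5                           # horizon and number of control intervals

def simulate(p):
    """Integrate x' = -x + u(t) with piecewise-constant control levels p."""
    def rhs(t, x):
        u = p[min(int(t / (T / N)), N - 1)]
        return [-x[0] + u]
    return solve_ivp(rhs, (0.0, T), [1.0], max_step=T / N)

def objective(p):
    # Track a terminal set-point of 0.5 while penalizing control effort.
    xT = simulate(p).y[0, -1]
    return (xT - 0.5)**2 + 1e-2 * np.sum(np.asarray(p)**2)

res = minimize(objective, np.zeros(N), bounds=[(-1.0, 1.0)] * N)
print(res.x, objective(res.x))
```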