The control parameterization method is a popular numerical technique for solving optimal control problems. The main idea of control parameterization is to discretize the control space by approximating the control function by a linear combination of basis functions. Under this approximation scheme, the optimal control problem is reduced to an approximate nonlinear optimization problem with a finite number of decision variables. This approximate problem can then be solved using nonlinear programming techniques. The aim of this paper is to introduce the fundamentals of the control parameterization method and survey its various applications to non-standard optimal control problems. Topics discussed include gradient computation, numerical convergence, variable switching times, and methods for handling state constraints. We conclude the paper with some suggestions for future research.
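As a minimal sketch of the idea (illustrative only, not taken from the paper): for the scalar system x' = u with cost J = int_0^1 (x^2 + u^2) dt and x(0) = 1, restricting u to a piecewise-constant function with N levels reduces the problem to a nonlinear program in the N decision variables u_1, ..., u_N, which a standard NLP solver such as scipy.optimize.minimize can handle. The system, horizon, and solver are all assumptions chosen for brevity.

```python
import numpy as np
from scipy.optimize import minimize

N = 10          # number of control intervals on [0, 1]
h = 1.0 / N     # interval length
x0 = 1.0

def cost(u):
    """J(u) = int (x^2 + u^2) dt for x' = u, x(0) = x0,
    with u piecewise constant: u(t) = u[k] on the k-th interval."""
    x, J = x0, 0.0
    for uk in u:
        # On each interval x(t) = x + uk*t is linear, so the running
        # cost integrates in closed form:
        # int_0^h (x + uk*t)^2 dt = x^2 h + x uk h^2 + uk^2 h^3 / 3
        J += x**2 * h + x * uk * h**2 + uk**2 * h**3 / 3.0
        J += uk**2 * h                      # control-effort term
        x += uk * h                         # state at the next knot
    return J

# The approximate problem is a finite-dimensional NLP in u.
res = minimize(cost, np.zeros(N))
```

For this linear-quadratic instance the exact optimal cost is tanh(1) * x0^2 ~ 0.762, so the piecewise-constant approximation should land slightly above that value, improving as N grows.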
In this paper, we develop a computational method for a class of optimal control problems where the objective and constraint functionals depend on two or more discrete time points. These time points can be either fixed or variable. Using the control parametrization technique and a time scaling transformation, this type of optimal control problem is approximated by a sequence of approximate optimal parameter selection problems. Each of these approximate problems can be viewed as a finite dimensional optimization problem. New gradient formulae for the cost and constraint functions are derived. With these gradient formulae, standard gradient-based optimization methods can be applied to solve each approximate optimal parameter selection problem. For illustration, two numerical examples are solved.
We consider an optimal control problem with a nonlinear continuous inequality constraint. Both the state and the control are allowed to appear explicitly in this constraint. By discretizing the control space and applying a novel transformation, a corresponding class of semi-infinite programming problems is derived. A solution of each problem in this class furnishes a suboptimal control for the original problem. Furthermore, we show that such a solution can be computed efficiently using a penalty function method. On the basis of these two ideas, an algorithm that computes a sequence of suboptimal controls for the original problem is proposed. Our main result shows that the cost of these suboptimal controls converges to the minimum cost. For illustration, an example problem is solved.
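A hedged sketch of the penalty idea on an assumed toy problem (the system, constraint, and penalty form are illustrative, not the paper's): minimize int_0^1 (u - 2)^2 dt for x' = u, x(0) = 0, subject to the continuous inequality constraint x(t) <= 0.5 for all t. The constraint is replaced by an integral penalty sigma * int max(x(t) - 0.5, 0)^2 dt with an increasing penalty parameter sigma; since x is piecewise linear under piecewise-constant u, checking the knot values bounds the whole path.

```python
import numpy as np
from scipy.optimize import minimize

N, T = 8, 1.0
h = T / N

def trajectory(u):
    """Knot values of x for x' = u, x(0) = 0, u piecewise constant."""
    return np.concatenate(([0.0], np.cumsum(u) * h))

def penalized_cost(u, sigma):
    x = trajectory(u)
    run = np.sum((np.asarray(u) - 2.0) ** 2) * h    # int (u - 2)^2 dt
    viol = np.maximum(x[1:] - 0.5, 0.0)             # violation of x(t) <= 0.5
    return run + sigma * np.sum(viol ** 2) * h      # integral-type penalty

u = np.zeros(N)
for sigma in (1e1, 1e3, 1e5):   # increasing penalty parameter
    u = minimize(lambda v: penalized_cost(v, sigma), u).x
```

Warm-starting each solve from the previous penalty level is the usual continuation trick; the computed controls approach feasibility as sigma grows, mirroring the convergence result stated in the abstract.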
We consider a switched-capacitor DC/DC power converter with variable switching instants. The determination of optimal switching instants giving low output ripple and strong load regulation is posed as a non-smooth dynamic optimization problem. By introducing a set of auxiliary differential equations and applying a time-scaling transformation, we formulate an equivalent optimization problem with semi-infinite constraints. Existing algorithms can be applied to solve this smooth semi-infinite optimization problem. The existence of an optimal solution is also established. For illustration, the optimal switching instants for a practical switched-capacitor DC/DC power converter are determined using this approach.
This paper considers the problem of using noisy output data to estimate unknown time-delays and unknown system parameters in a general nonlinear time-delay system. We formulate the problem as a dynamic optimization problem in which the unknown quantities are decision variables to be chosen optimally, with the cost function penalizing the mean and variance of the least-squares error between actual and predicted system output. Since the time-delays and system parameters influence the cost function implicitly through the governing time-delay system, the cost function's gradient, which is required to solve the problem using gradient-based optimization techniques, cannot be computed analytically using standard differentiation rules. We instead develop two computational methods for evaluating this gradient: one involves solving an auxiliary time-delay system forward in time; the other involves solving an auxiliary time-advance system backward in time. On this basis, we propose an efficient optimization algorithm for determining optimal estimates for the time-delays and system parameters. We conclude the paper by examining the performance of this algorithm on a dynamic model of a continuously-stirred tank reactor.
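To illustrate the flavor of the forward auxiliary-system approach on a delay-free analogue (an assumption made purely for brevity; the paper's method handles genuine time-delays): for x' = -a*x, the sensitivity s = dx/da obeys the auxiliary equation s' = -a*s - x, solved forward in time alongside the state, and the gradient of the least-squares cost then follows from the chain rule. The model, data, and parameter values below are all illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

t_obs = np.linspace(0.1, 2.0, 20)
a_true = 1.3
y_obs = np.exp(-a_true * t_obs)     # noise-free synthetic data for the sketch

def cost_and_grad(a, x0=1.0):
    """Least-squares cost J(a) and dJ/da via the forward sensitivity system."""
    rhs = lambda t, z: [-a * z[0],              # x' = -a x       (state)
                        -a * z[1] - z[0]]       # s' = -a s - x   (s = dx/da)
    sol = solve_ivp(rhs, (0.0, 2.0), [x0, 0.0], t_eval=t_obs,
                    rtol=1e-10, atol=1e-12)
    x, s = sol.y
    r = x - y_obs                               # output residuals
    return np.sum(r ** 2), 2.0 * np.sum(r * s)  # J and dJ/da

J, g = cost_and_grad(1.0)
```

With the gradient available, any standard gradient-based optimizer can drive a toward its least-squares estimate; in the true time-delay setting the auxiliary system becomes a time-delay (or, for the backward variant, time-advance) system.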
In this paper, we consider a challenging optimal control problem in which the terminal time is determined by a stopping criterion. This stopping criterion is defined by a smooth surface in the state space; when the state trajectory hits this surface, the governing dynamic system stops. By restricting the controls to piecewise constant functions, we derive a finite-dimensional approximation of the optimal control problem. We then develop an efficient computational method, based on nonlinear programming, for solving the approximate problem. We conclude the paper with four numerical examples.
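A small sketch of how such a state-dependent stopping criterion can be handled numerically (an assumed toy integrator and stopping surface, with scipy's event detection standing in for the hitting-time computation): the horizon ends when the trajectory reaches g(x) = x - 1 = 0, so with piecewise-constant controls the terminal time T(u) becomes an implicit function of the control levels, evaluated by event-detecting integration.

```python
import numpy as np
from scipy.integrate import solve_ivp

def terminal_time(u_levels, x0=0.0, h=1.0):
    """Integrate x' = u_k over successive intervals of length h and stop
    when the trajectory hits the surface g(x) = x - 1 = 0."""
    def hit(t, y):
        return y[0] - 1.0
    hit.terminal = True     # stop the integration at the surface
    hit.direction = 1       # only trigger on upward crossings
    x, t = x0, 0.0
    for uk in u_levels:
        sol = solve_ivp(lambda s, y: [uk], (0.0, h), [x], events=hit,
                        rtol=1e-10, atol=1e-12)
        if sol.t_events[0].size:              # surface reached mid-interval
            return t + sol.t_events[0][0]
        x, t = sol.y[0, -1], t + h
    return np.inf                             # surface never reached

T = terminal_time([0.25, 0.5, 1.0])
```

Wrapping terminal_time (and a cost depending on it) in a nonlinear programming solver gives a finite-dimensional approximation of the stopping-criterion problem in the spirit of the abstract.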
model we consider four parameters. In the latter, we selected two different objective functions, leading to uni-modal and bi-modal stationary distributions. The techniques presented in this technical note could also aid the design of novel gene regulatory circuits with desirable properties, or help determine how best to combine circuits, with each matrix Hi representing a different one.

ACKNOWLEDGMENT
The authors wish to thank the anonymous reviewers for their thorough and helpful comments. The quality of the technical note was greatly improved through their significant contribution.