Abstract: For the solution of optimal control problems involving an index-1 differential-algebraic equation, an efficient function-evaluation algorithm is proposed in this paper. In the evaluation procedure, the state equation is propagated forward; then, the adjoint sensitivity is propagated backward. Thus, it is computationally more efficient than forward sensitivity propagation when the number of constraints is smaller than the number of optimization variables. In order to reduce Newton iterations, the adjoint sensitivity is der…
“…Thus, this method presents an efficient way to solve optimal control problems (OCPs) of middle or small scale without the assistance of commercial sparse linear algebraic algorithms. Then, this method is extended to sequential (or single-shooting) methods in [10,9], where corresponding forward and adjoint (backward) propagation algorithms are proposed for gradient evaluation. It is also shown in [10] that integration accuracy can be guaranteed by introducing constraints restricting the integration error.…”
“…This technique cannot guarantee the feasibility of solution in certain cases [8]. Moreover, when there are more constraints than optimization variables, the adjoint method proposed in [9] will be less efficient than the forward one [10]. Then, is there an efficient adjoint method to compute the optimal control subject to continuous inequality constraints?…”
“…Then, is there an efficient adjoint method to compute the optimal control subject to continuous inequality constraints? In this paper, the exact penalty function proposed in [8] is introduced in the adjoint method in [9], where continuous inequality constraints are transformed and penalized in the cost so that the surrogate problem has only box constraints on the variables. This penalty function is smooth and locally exact, which derives from [7] and has a form similar to those in [27,28,12,11,14,3].…”
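The snippet above describes the constraint transcription only in words. As a generic sketch of the idea (the smooth, locally exact penalty of [8] differs in detail; the quadratic form and the penalty parameter \(\gamma\) here are illustrative), a continuous-time inequality constraint can be folded into the cost as:

```latex
g_j\big(x(t), u(t)\big) \le 0 \quad \forall\, t \in [0, T],\; j = 1, \dots, m
\;\;\longrightarrow\;\;
\tilde J(z) \;=\; J(z) \;+\; \frac{1}{\gamma}\sum_{j=1}^{m} \int_0^T \max\!\big\{0,\; g_j\big(x(t), u(t)\big)\big\}^{2}\, dt,
```

so that the surrogate problem in the decision variables \(z\) retains only box constraints, which is the property the quoted passage relies on.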
“…Compared with [10], the proposed method is simplified, as time-scaling transformation [15] is not required to derive the sensitivity with respect to time steps. The proof of the adjoint sensitivity propagation rule is also provided in this paper, which serves as a complement to [9]. This computational method is based on the idea of control parametrization [13], and achieves improved efficiency by embedding a tailored integrator.…”
“…This computational method is based on the idea of control parametrization [13], and achieves improved efficiency by embedding a tailored integrator. It is demonstrated in [9] that the proposed IRK integrator enhanced by tangential prediction is superior to that in [17] when computational time is stringently constrained. As an improvement to the method in [9], the computational method proposed in this paper provides an opportunity to solve OCPs with complex path constraints in real time.…”
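The core of the "one Newton iteration per step" idea can be illustrated with a much-simplified stand-in: an implicit Euler step (rather than a full lifted IRK scheme) where the stage value is carried across steps as a warm start, so a single Newton iteration per step suffices. All names and the test dynamics below are illustrative, not taken from the paper.

```python
# Simplified sketch of the lifted-integrator idea: one Newton iteration
# per implicit integration step, warm-started by the previous stage value.
def f(x):
    return -x ** 3 + 1.0          # illustrative scalar dynamics x' = f(x)

def df(x):
    return -3.0 * x ** 2          # derivative of f, used in the Newton step

def implicit_euler_one_newton(x0, h, n_steps):
    x, z = x0, x0                 # z carries the "lifted" stage value across steps
    for _ in range(n_steps):
        # residual of the implicit step: r(z) = z - x - h*f(z)
        r = z - x - h * f(z)
        J = 1.0 - h * df(z)
        z = z - r / J             # a single Newton iteration
        x = z                     # accept the step; z warm-starts the next one
    return x

x_final = implicit_euler_one_newton(x0=0.0, h=0.1, n_steps=200)
print(x_final)                    # settles near the equilibrium f(x) = 0, i.e. x = 1
```

Because consecutive steps start close to the new solution, the single Newton iteration stays accurate; the paper's lifted IRK integrator with tangential prediction exploits the same effect with a higher-order collocation scheme and a sensitivity update.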
Adjoint methods applied to solve optimal control problems (OCPs) are subject to the restriction that the number of constraints should be smaller than the number of optimization variables; otherwise, they are less efficient than forward methods. This paper proposes an efficient adjoint method to solve OCPs for index-1 differential-algebraic systems with continuous-time inequality constraints. The continuous-time inequality constraints are not discretized on a time grid but transformed into integrals and penalized in the cost through an exact penalty function. Thus, all constraints except for box constraints on the optimization variables can be removed. Furthermore, a lifted implicit Runge-Kutta (IRK) integrator with adjoint sensitivity propagation is employed to accelerate the function- and gradient-evaluation procedure. Based on a sensitivity update technique, the number of Newton iterations involved in the forward simulation can be reduced to one. Besides, Lagrange interpolation is applied to approximate the states not on collocation points, so that the integrals in the penalty function can be evaluated on the same grid as the forward simulation. Complexity analysis shows that, for the proposed algorithm, the computation involved in the adjoint sensitivity propagation is comparable to that of the forward one. Numerical simulations on the optimal maneuvering of a Delta robot demonstrate that the computational speed of the proposed adjoint algorithm is comparable to that of our previous one, which is based on the lifted IRK integrator and forward sensitivity propagation.
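The forward-versus-adjoint trade-off stated in the abstract can be seen in a minimal discrete-time sketch (not the paper's DAE algorithm): for a scalar cost, one backward adjoint sweep yields the gradient with respect to all inputs, whereas forward sensitivity would require one sweep per input. The linear dynamics and cost below are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch: gradient of J = 0.5*||x_N||^2 for the linear
# discrete dynamics x_{k+1} = A x_k + B u_k via one backward adjoint sweep.
def simulate(A, B, x0, U):
    xs = [x0]
    for u in U:
        xs.append(A @ xs[-1] + B @ u)
    return xs

def adjoint_gradient(A, B, x0, U):
    xs = simulate(A, B, x0, U)
    lam = xs[-1].copy()           # dJ/dx_N for J = 0.5*||x_N||^2
    grads = [None] * len(U)
    for k in range(len(U) - 1, -1, -1):
        grads[k] = B.T @ lam      # dJ/du_k
        lam = A.T @ lam           # propagate the adjoint backward
    return grads

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3)) * 0.3
B = rng.standard_normal((3, 2))
x0 = rng.standard_normal(3)
U = [rng.standard_normal(2) for _ in range(5)]

g = adjoint_gradient(A, B, x0, U)

# Finite-difference check on one entry of u_0 (close agreement expected)
eps = 1e-6
Up = [u.copy() for u in U]
Up[0][0] += eps
J = 0.5 * np.sum(simulate(A, B, x0, U)[-1] ** 2)
Jp = 0.5 * np.sum(simulate(A, B, x0, Up)[-1] ** 2)
print(abs((Jp - J) / eps - g[0][0]))
```

Here the backward sweep costs one pass regardless of how many inputs there are, which is why the adjoint mode wins when the number of constraint functionals (scalar outputs) is small relative to the number of optimization variables.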