2017
DOI: 10.1080/10556788.2017.1288730
On solving hybrid optimal control problems with higher index DAEs

Cited by 10 publications (22 citation statements); references 33 publications.
“…Below, we present the results of applying the presented algorithm to two optimal control problems based on hybrid systems. They have been previously discussed in Pytlak and Suski (2017), but here we show results for different versions of these problems. Please note that the purpose of these examples is purely illustrative, within the discussion of the capabilities of the implemented solver; they do not necessarily represent problems of real engineering importance or application.…”
Section: Examples
confidence: 83%
“…analogous to formulas (32)-(33). The formulae (44)-(45) are derived in Pytlak and Suski (2017). To solve the optimal control problem, we replace the control functions by their piecewise-constant approximations and follow the optimization procedure outlined below.…”
Section: Numerical Procedures
confidence: 99%
“…The numerical procedure which we used to solve the problem (9), (1)–(8), (10)–(15) is described, to a large extent, in [20] and [21]. The main features of the procedure are: it is based on the Radau IIa version of a Runge–Kutta method for integrating differential equations, and it uses adjoint equations to evaluate gradients of the functions defining the optimization problem.…”
Section: Methods
confidence: 99%
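The combination described in this statement — an implicit Runge–Kutta discretization with adjoint (backward) gradient evaluation under piecewise-constant controls — can be sketched minimally. The sketch below is not the cited solver: it uses implicit Euler (a one-stage implicit Runge–Kutta method) as a stand-in for Radau IIa, and a hypothetical scalar problem of our own choosing; all names and dynamics are assumptions for illustration only.

```python
import numpy as np

# Hypothetical problem (illustrative only): minimize J = 0.5 * x(T)^2
# subject to x' = -x + u, with piecewise-constant controls u_k,
# discretized by implicit Euler as a stand-in for Radau IIa.
def simulate(u, x0=1.0, h=0.1):
    xs = [x0]
    for uk in u:
        # implicit Euler step: x_{k+1} = x_k + h * (-x_{k+1} + u_k)
        xs.append((xs[-1] + h * uk) / (1.0 + h))
    return np.array(xs)

def cost_and_gradient(u, x0=1.0, h=0.1):
    xs = simulate(u, x0, h)
    J = 0.5 * xs[-1] ** 2
    # Discrete adjoint sweep: lam_N = dJ/dx_N, propagated backwards
    # through the step x_{k+1} = (x_k + h*u_k) / (1 + h).
    lam = xs[-1]
    grad = np.zeros_like(u)
    for k in range(len(u) - 1, -1, -1):
        grad[k] = lam * h / (1.0 + h)   # dJ/du_k = lam_{k+1} * dx_{k+1}/du_k
        lam = lam / (1.0 + h)           # lam_k = lam_{k+1} * dx_{k+1}/dx_k
    return J, grad
```

One backward sweep yields the gradient with respect to every control parameter, which is what makes the adjoint approach attractive for this class of discretized optimal control problems.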
“…To avoid additional Newton iterations for integration, we can use the implicit function theorem [3,6]. Specifically, k n is a function of ξ n from (8). Then, we can obtain from the implicit function theorem that…”
Section: Gradient Evaluation By Adjoint Sensitivity Propagation
confidence: 99%
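The implicit-function-theorem idea quoted here — differentiating an implicitly defined Runge–Kutta stage without extra Newton iterations — can be illustrated on a single scalar stage. The function f, the step size, and the stage equation below are hypothetical stand-ins, not taken from the cited paper.

```python
# Hypothetical scalar stage equation: k = f(x + h*k), i.e.
# g(k, x) = k - f(x + h*k) = 0 defines k implicitly as a function of x.
def f(x):
    return -x * x

def fp(x):            # derivative of f
    return -2.0 * x

def solve_stage(x, h, tol=1e-12):
    # Newton iteration on g(k) = k - f(x + h*k) = 0
    k = f(x)
    for _ in range(50):
        y = x + h * k
        k_new = k - (k - f(y)) / (1.0 - h * fp(y))
        if abs(k_new - k) < tol:
            return k_new
        k = k_new
    return k

def stage_sensitivity(x, h):
    # Implicit function theorem: dk/dx = -(dg/dx) / (dg/dk)
    #                                  = f'(y) / (1 - h * f'(y)),  y = x + h*k.
    # No additional Newton iterations are needed for the derivative.
    y = x + h * solve_stage(x, h)
    return fp(y) / (1.0 - h * fp(y))
```

Once the stage has been solved, the sensitivity comes from a single linear solve with the already-available Jacobian of the stage residual, rather than from differentiating through the Newton iterations.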
“…Gradients were evaluated by propagating adjoint sensitivity in discrete time. Then, this method was extended to higher-index DAEs [7,8]. This method is more efficient than the forward method when the number of constraints is smaller than the number of optimization variables [2].…”
Section: Introduction
confidence: 99%
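The efficiency claim in this statement — one adjoint sweep versus one forward-sensitivity sweep per optimization variable — can be made concrete on a toy linear chain. The dynamics, coefficients, and cost below are hypothetical, chosen only to show that both modes produce the same gradient at very different costs.

```python
import numpy as np

# Hypothetical chain x_{k+1} = a*x_k + b*u_k with scalar cost
# J = 0.5 * x_N^2. Forward sensitivity needs one sweep per control u_k;
# the adjoint method recovers the whole gradient in a single backward sweep.
def forward_gradient(u, a=0.9, b=0.1, x0=1.0):
    N = len(u)
    xN = x0
    for uk in u:
        xN = a * xN + b * uk
    grad = np.zeros(N)
    for k in range(N):              # one forward sweep per variable: O(N^2)
        s = 0.0                     # s = dx_j/du_k, propagated forward
        for j in range(N):
            s = a * s + (b if j == k else 0.0)
        grad[k] = xN * s            # dJ/du_k = x_N * dx_N/du_k
    return grad

def adjoint_gradient(u, a=0.9, b=0.1, x0=1.0):
    N = len(u)
    xN = x0
    for uk in u:
        xN = a * xN + b * uk
    lam = xN                        # lam_N = dJ/dx_N
    grad = np.zeros(N)
    for k in range(N - 1, -1, -1):  # single backward sweep: O(N)
        grad[k] = lam * b
        lam = lam * a
    return grad
```

With one scalar cost and N controls, the adjoint sweep is roughly N times cheaper here, matching the cited observation that the adjoint mode wins when the number of constraint functions is smaller than the number of optimization variables.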