2019
DOI: 10.1016/j.automatica.2019.04.017

On the structure of the set of active sets in constrained linear quadratic regulation

Abstract: The constrained linear quadratic regulation problem is solved by a continuous piecewise affine function on a set of state space polytopes. It is an obvious question whether this solution can be built up iteratively by increasing the horizon, i.e., by extending the classical backward dynamic programming solution for the unconstrained case to the constrained case. Unfortunately, however, the piecewise affine solution for horizon N is in general not contained in the piecewise affine law for horizon N + 1. We show…
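To make the objects in the abstract concrete, the following is a minimal sketch (not taken from the paper) of the horizon-N constrained LQR problem posed as a QP with cvxpy, together with the extraction of the optimal active set, i.e. the set of input constraints that hold with equality at the optimum. The double-integrator data, the simple terminal weight, the helper name clqr_active_set, and the activity tolerance are illustrative assumptions.

```python
# Sketch only: horizon-N constrained LQR as a QP, plus optimal active set extraction.
import numpy as np
import cvxpy as cp

def clqr_active_set(A, B, Q, R, N, u_max, x0, tol=1e-4):
    """Solve the horizon-N constrained LQR QP; return (optimal inputs, optimal active set)."""
    nx, nu = B.shape
    x = cp.Variable((nx, N + 1))
    u = cp.Variable((nu, N))
    cost = cp.quad_form(x[:, N], Q)          # simple terminal weight, chosen only for illustration
    cons = [x[:, 0] == x0]
    for k in range(N):
        cost += cp.quad_form(x[:, k], Q) + cp.quad_form(u[:, k], R)
        cons += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                 cp.abs(u[:, k]) <= u_max]   # input box constraint |u_k| <= u_max
    cp.Problem(cp.Minimize(cost), cons).solve()
    # Optimal active set: (stage, input index, bound) triples that are tight at the optimum.
    # The loose tolerance absorbs solver inaccuracy; it is an arbitrary choice.
    active = set()
    for k in range(N):
        for i in range(nu):
            if u_max - u.value[i, k] < tol:
                active.add((k, i, "+"))      # upper bound u_i <= u_max active
            if u.value[i, k] + u_max < tol:
                active.add((k, i, "-"))      # lower bound u_i >= -u_max active
    return u.value, active

# Illustrative data: a discrete-time double integrator with a box constraint on the input.
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.5], [1.0]])
Q, R = np.eye(2), np.array([[1.0]])
u_opt, active = clqr_active_set(A, B, Q, R, N=5, u_max=1.0, x0=np.array([3.0, 0.0]))
print("optimal active set:", sorted(active))
```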

Cited by 22 publications (19 citation statements)
References 18 publications
“…Extending the optimal active sets for horizon N − 1 to those for N formally corresponds to a backward dynamic programming step [12]. It is an obvious question to ask whether also the geometric approaches (see Sect.…”
Section: Discussion
confidence: 99%
“…The essential idea is as follows. An optimal active set of the constrained LQR problem with horizon N always contains an optimal active set of the same problem with horizon N − 1 [12,Prop. 1].…”
Section: Introduction
confidence: 99%
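The containment quoted above from [12, Prop. 1] can be made plausible numerically: for the strictly convex problems considered here, the principle of optimality says the tail of the horizon-N solution from a state x0 solves the horizon-(N − 1) problem from the successor state, so that problem's optimal active set should reappear, with stage indices shifted by one, inside the horizon-N active set. The sketch below reuses the hypothetical clqr_active_set helper and the illustrative data from the block after the abstract; the shift convention is an assumption of this sketch, not necessarily the indexing used in [12].

```python
# Plausibility check of the containment property (sketch, reuses clqr_active_set from above).
N = 5
x0 = np.array([3.0, 0.0])

u_N, active_N = clqr_active_set(A, B, Q, R, N, u_max=1.0, x0=x0)
x1 = A @ x0 + B @ u_N[:, 0]                  # successor state under the first optimal input
_, active_Nm1 = clqr_active_set(A, B, Q, R, N - 1, u_max=1.0, x0=x1)

# Re-index the horizon-(N-1) constraints as stages 1..N-1 of the horizon-N problem,
# then test set containment.
shifted = {(k + 1, i, s) for (k, i, s) in active_Nm1}
print("horizon-(N-1) active set contained in horizon-N active set:", shifted <= active_N)
```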
“…Closed-loop optimal sequences of affine laws [14,15]. With this approach all polytopes (and their feedback laws) that contain a state of the closed-loop trajectory can be computed from the solution of a QP at the current state, i.e., a single point x ∈ X_f. If the terminal constraints are inactive at the current state x, then the solution of the QP for the state x does not only provide a single feedback law but the entire sequence of optimal feedback laws and their polytopes of validity along the closed-loop trajectory.…”
Section: New Approaches
confidence: 99%
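The approach quoted above rests on the fact that, on each polytope, the optimal inputs depend affinely on the state. The sketch below (again reusing the hypothetical clqr_active_set helper defined after the abstract) illustrates this with finite differences: as long as small state perturbations leave the optimal active set unchanged, the first optimal input u0*(x) varies affinely with x, and a local feedback gain can be read off. It is only an illustration of the piecewise-affine structure, not the construction of [14,15]; the perturbation size and the chosen state are arbitrary.

```python
# Finite-difference illustration of the local affine law u0 = K x + offset (sketch only).
x_bar = np.array([1.0, 0.0])       # a state where the input constraints are likely inactive
u_bar, act_bar = clqr_active_set(A, B, Q, R, N=5, u_max=1.0, x0=x_bar)

eps = 0.05                         # crude step; solver accuracy limits how small it can be
K = np.zeros((1, 2))
for j in range(2):
    dx = np.zeros(2)
    dx[j] = eps
    u_pert, act_pert = clqr_active_set(A, B, Q, R, N=5, u_max=1.0, x0=x_bar + dx)
    if act_pert != act_bar:
        print("perturbation left the current polytope; gain estimate invalid")
    K[0, j] = (u_pert[0, 0] - u_bar[0, 0]) / eps

offset = u_bar[:, 0] - K @ x_bar   # affine offset of the local law u0 = K x + offset
print("local affine feedback law: u0 =", K, "x +", offset)
```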
“…We do not treat numerical methods such as tailored optimization algorithms here, but exploit the piecewise-affine structure of the solution [1,23,22]. We stress that we never calculate explicit control laws, but the present paper belongs to a group of works [3,18,17,4,10] that exploit the affine structure, or the corresponding structure of the set of active sets [8,5,9,20,19], to accelerate online MPC.…”
Section: Introduction
confidence: 99%