2014
DOI: 10.1109/tac.2013.2270037
Improved Feed-Forward Command Governor Strategies for Constrained Discrete-Time Linear Systems

Cited by 14 publications (8 citation statements)
References 29 publications (41 reference statements)
“…Finally, a more extreme scenario that is worth mentioning is the case where no measurement is available [52]. This is equivalent to the case when observer (29) is used with L = 0.…”
Section: Disturbance, Noise and Output Feedback
confidence: 99%
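The "observer with L = 0" case quoted above can be made concrete with a minimal sketch. The matrices A, B, C and the signal values below are illustrative placeholders, not taken from the cited paper; the point is only that setting the observer gain L to zero removes the measurement-correction term, so the estimator degenerates into an open-loop prediction of the nominal model:

```python
import numpy as np

# Luenberger observer: x_hat+ = A x_hat + B u + L (y - C x_hat).
# With L = 0 the correction term vanishes, which is exactly the
# "no measurement available" scenario mentioned in the quotation.
# A, B, C are illustrative values (A chosen Schur), not the paper's.
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])

def observer_step(x_hat, u, y, L):
    """One observer update; L = 0 reduces it to open-loop prediction."""
    return A @ x_hat + B @ u + L @ (y - C @ x_hat)

x_hat = np.zeros((2, 1))
u = np.array([[1.0]])
y = np.array([[0.5]])
L0 = np.zeros((2, 1))

# With L = 0 the measurement y has no effect on the estimate:
open_loop = observer_step(x_hat, u, y, L0)
assert np.allclose(open_loop, A @ x_hat + B @ u)
```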
“…Reference/command governors not making use of any measurements are known in the literature as feedforward reference/command governors [101,103]. The fact that RG schemes can be built even in the absence of measurements of the state (although at the cost of a conservatism that grows with the size of the disturbance set W) is due to the fact that since A is Schur, the nominal system is open loop detectable [52].…”
Section: Disturbance, Noise and Output Feedback
confidence: 99%
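A feedforward command governor of the kind described in the quotation can be sketched in a few lines. This is an illustrative toy, not the scheme of the cited paper: the governor picks a constant virtual command v as close as possible to the reference r such that the nominal open-loop prediction (no state measurements, relying on A being Schur) keeps the output within a bound. The matrices, the bound y_max, the horizon N, and the candidate grid are all assumptions made for the example:

```python
import numpy as np

# Toy feedforward command governor: choose v closest to r such that
# the nominal open-loop response x+ = A x + B v keeps y = C x <= y_max
# over a finite prediction horizon. No measurement is used online.
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])      # Schur: nominal dynamics are stable
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])
y_max = 1.0                     # output constraint (assumed)
N = 50                          # prediction horizon (assumed)

def admissible(v, x0):
    """Check the output constraint along the nominal prediction."""
    x = x0.copy()
    for _ in range(N):
        x = A @ x + B @ np.array([[v]])
        if (C @ x).item() > y_max:
            return False
    return True

def governor(r, x0, candidates=np.linspace(0.0, 2.0, 201)):
    """Grid search: admissible candidate command closest to r."""
    feasible = [v for v in candidates if admissible(v, x0)]
    return min(feasible, key=lambda v: abs(v - r))

x0 = np.zeros((2, 1))
v = governor(r=2.0, x0=x0)      # governor clips r down to a safe command
```

For this system the DC gain from v to y is 5, so the governor settles near v = 0.2 (the largest command whose steady-state output respects y_max = 1), illustrating how the virtual command trades tracking for constraint satisfaction.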
“…Our SFC solutions can be viewed as a support for other control solutions including fuzzy, neural, sliding mode and adaptive control (Blažič et al., 2010; Precup et al., 2009, 2012; Ruano et al., 2002). The performance can be improved by inserting sensitivity, robustness objectives and constraints (Casavola et al., 2014; Gutiérrez-Carvajal et al., 2016). The pole placement method applied in this paper can be replaced by the optimal design and tuning by means of classical or modern optimization algorithms (Bandarabadi et al., 2015; Johanyák, 2015; Menchaca-Mendez and Coello Coello, 2016).…”
Section: Discussion
confidence: 99%
“…It generates a virtual command designed to stay as close as possible to the original, based on the reference received from the human operator and the outputs of the closed-loop system (measured or estimated). Meanwhile, the governor does not alter the dynamic performance of the controlled plant, which may already be stabilized by a closed-loop controller [11], [12].…”
Section: Introduction
confidence: 99%