2020 American Control Conference (ACC)
DOI: 10.23919/acc45564.2020.9147255
Model-free Learning for Safety-critical Control Systems: A Reference Governor Approach

Cited by 9 publications (9 citation statements) | References 9 publications
“…However, for more complex scenarios, including those involving multiple vehicle interactions, an explicit model is typically not available. Therefore, an emerging approach is to train a safety supervisor through learning, as investigated in References 26 and 31–33. Note that the learning objectives of the aforementioned learning-based approaches to control policy design and a learning-based approach to safety supervisor design are different: the former aims at developing a nominal control policy that achieves optimal performance with regard to (the expected value of) a reward function, while the latter focuses on the safety aspect and typically pursues minimum modification to the nominal control.…”
Section: Introduction
confidence: 99%
“…Simulation results are reported which illustrate learning and vehicle response during step commands, sine-and-dwell tests and when driving conditions change. This paper is distinguished from [28] by providing more details of LRG algorithms, extending the theoretical analysis of LRG, providing detailed proofs, and illustrating a practical application to the fuel truck rollover avoidance. This paper is also different from [29], where the latter focuses on an integration of a neural network into LRG to handle variability in road conditions and speed up online computations; however, for such an approach based on a neural network, theoretical constraint enforcement guarantees are no longer available.…”
Section: Introduction
confidence: 99%
“…The present paper considers the application of a recently proposed Learning Reference Governor (LRG) [24][25][26][27] to performing ARPOD maneuvers. The LRG is an add-on scheme to a nominal control system and is used to enforce pointwise-in-time state and control constraints.…”
confidence: 99%
“…The LRG is an add-on scheme to a nominal control system and is used to enforce pointwise-in-time state and control constraints. Its viability has been previously demonstrated in applications to vehicle rollover avoidance [24][25][26] and misfire avoidance in spark-ignition engines [27]. In our application to spacecraft control, the LRG monitors and modifies the command generated by a higher-level ARPOD planning algorithm when it becomes necessary to avoid constraint violations.…”
confidence: 99%
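The citation statements above describe the reference-governor idea the LRG builds on: an add-on that sits between a commanded reference and a nominal closed-loop system, and moves the applied reference toward the command only as far as pointwise-in-time constraints allow. The paper's LRG learns constraint admissibility from data (model-free); the sketch below instead illustrates the underlying classical, model-based reference-governor mechanism. All names, the linear dynamics, and the scalar state bound are illustrative assumptions, not the paper's algorithm.

```python
# Illustrative reference-governor sketch (generic, model-based; the paper's
# LRG replaces the model-based admissibility check with one learned from data).
import numpy as np

def step(x, v, A, B):
    """One step of assumed linear closed-loop dynamics x+ = A x + B v."""
    return A @ x + B @ v

def admissible(x, v, A, B, c_max, horizon=50):
    """Check the pointwise-in-time constraint |x_i| <= c_max over a finite
    horizon when the applied reference v is held constant (assumed constraint)."""
    for _ in range(horizon):
        x = step(x, v, A, B)
        if np.any(np.abs(x) > c_max):
            return False
    return True

def governor(x, v_prev, r, A, B, c_max, grid=20):
    """Move the applied reference from v_prev toward the desired command r,
    v = v_prev + k (r - v_prev), taking the largest admissible k on a grid.
    Stops at the first inadmissible point (assumes admissibility is monotone
    along the segment, a standard reference-governor property)."""
    best = v_prev
    for k in np.linspace(0.0, 1.0, grid + 1):
        v = v_prev + k * (r - v_prev)
        if admissible(x, v, A, B, c_max):
            best = v
        else:
            break
    return best
```

For a stable scalar system `A = [[0.9]]`, `B = [[0.1]]` with bound `c_max = 1.0`, a command `r = 2.0` would eventually drive the state past the bound, so the governor applies a reduced reference near 1.0; a command already safe (e.g. `r = 0.5`) passes through unmodified. The "minimum modification to the nominal control" quoted above corresponds to taking the largest admissible `k`.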