2016 IEEE 55th Conference on Decision and Control (CDC)
DOI: 10.1109/cdc.2016.7798458

Event-triggered H-infinity control for unknown continuous-time linear systems using Q-learning

Cited by 10 publications (13 citation statements)
References 9 publications
“…In [22], we have brought together intermittent mechanisms to alleviate the burden of an actor-critic framework. This was later extended for systems with unknown dynamics under a Q-learning framework in [23]. The authors of [24] developed a controller with intermittent communication for a multi-agent system, whose dwell-time conditions, needed for stability, as well as safety constraints were expressed by metric temporal logic specifications.…”
Section: Related Work
confidence: 99%
“…Proof 1. The proof of this part follows the work of Vamvoudakis and Ferraz and is omitted here. 2.…”
Section: Model-based Intermittent Feedback Design
confidence: 99%
“…Denote the worst-case disturbance for the intermittent feedback u_s as w_s. Then, the control policy is optimal, with the optimal cost under the worst-case disturbance given in the work of Vamvoudakis and Ferraz as follows:

$$ J\big(x_0;\,u(\cdot),w(\cdot),u_s(\cdot)\big)=\frac{1}{2}x_0^{T}Px_0+\frac{1}{2}\int_{0}^{\infty}\big(u(\tau)-u_s(\tau)\big)^{T}R\,\big(u(\tau)-u_s(\tau)\big)\,d\tau-\frac{\gamma^{2}}{2}\int_{0}^{\infty}\big\|w(\tau)-w_s(\tau)\big\|^{2}\,d\tau. $$

Extending this to a disturbance modeled as an impulsive system, as is done with the control, is then straightforward. Both the worst-case disturbance w_d and the corresponding optimal continuous feedback u^* can be learned using the intermittent Q-learning algorithm.…”
Section: Model-free Intermittent Feedback Design: Q-learning Algorithm
confidence: 99%
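To make the quoted cost concrete, here is a minimal numerical sketch that evaluates the cost functional J(x_0; u(·), w(·), u_s(·)) from sampled signals via a Riemann sum. All values (P, R, γ, signal shapes, step size) are illustrative placeholders, not taken from the cited paper, and the discretization is only an approximation of the continuous-time integrals.

```python
import numpy as np

def hinf_cost(x0, P, R, gamma, u, u_s, w, w_s, dt):
    """Approximate the quoted cost
        J = 1/2 x0' P x0
          + 1/2 ∫ (u - u_s)' R (u - u_s) dτ
          - γ²/2 ∫ ||w - w_s||² dτ
    with a Riemann sum over sampled trajectories (illustrative only)."""
    du = u - u_s                                  # control deviation, shape (T, m)
    dw = w - w_s                                  # disturbance deviation, shape (T, q)
    quad_u = np.einsum('ti,ij,tj->t', du, R, du)  # (u - u_s)' R (u - u_s) at each step
    J = 0.5 * x0 @ P @ x0                         # quadratic terminal/value term
    J += 0.5 * np.sum(quad_u) * dt                # penalize deviation from u_s
    J -= 0.5 * gamma**2 * np.sum(dw**2) * dt      # attenuate disturbance deviation
    return J

# Toy example with hypothetical values (not from the paper):
x0 = np.array([1.0, -0.5])
P = np.eye(2)
R = np.eye(1)
gamma = 2.0
T, dt = 100, 0.01
u = np.zeros((T, 1)); u_s = np.zeros((T, 1))      # control matches the intermittent one
w = 0.1 * np.ones((T, 1)); w_s = np.zeros((T, 1))
J = hinf_cost(x0, P, R, gamma, u, u_s, w, w_s, dt)
```

With u = u_s, the control term vanishes and J reduces to the initial quadratic term minus the γ²-weighted disturbance integral, which mirrors the attenuation trade-off the quoted passage describes.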