2020
DOI: 10.1049/iet-cta.2020.0098
H∞ tracking control via variable gain gradient descent‐based integral reinforcement learning for unknown continuous time non‐linear system

Abstract: Optimal tracking of continuous‐time non‐linear systems has been studied extensively in the literature. However, in several applications, the absence of knowledge about the system dynamics poses a severe challenge to solving the optimal tracking problem. This has attracted growing attention among researchers recently, and integral reinforcement learning based methods augmented with an actor neural network (NN) have been deployed to this end. However, very few studies have addressed model‐free H∞ optimal tracking control tha…
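The abstract's "variable gain gradient descent" NN weight update can be illustrated with a minimal sketch. This is not the paper's actual update law: the gain schedule (a normalized-gradient-style gain that shrinks with the regressor norm), the variable names, and the scalar regression setup are all illustrative assumptions.

```python
import numpy as np

def variable_gain_update(w, phi, delta, alpha_min=0.1, alpha_max=1.0):
    """One gradient step on the squared error 0.5*delta**2, where
    delta = w @ phi - target. The learning gain varies with the
    regressor magnitude (larger regressor -> smaller gain), one common
    way to realize a 'variable gain' gradient descent.
    All parameter names here are assumptions for illustration."""
    norm2 = 1.0 + phi @ phi                                # normalization term
    alpha = alpha_min + (alpha_max - alpha_min) / norm2    # variable gain
    grad = delta * phi                                     # d(0.5*delta^2)/dw
    return w - alpha * grad

# Toy usage: fit a linear-in-the-weights approximator to a scalar target.
w = np.zeros(3)
phi = np.array([1.0, 0.5, -0.2])
target = 2.0
for _ in range(200):
    delta = w @ phi - target
    w = variable_gain_update(w, phi, delta)
# w @ phi now closely approximates target
```

In the paper's setting the error would be an integral Bellman residual rather than a simple regression error, and robust terms would be added to the update; this sketch only shows the variable-gain mechanism itself.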

Cited by 9 publications (5 citation statements)
References 36 publications
“…Moreover, IRL-based event-triggered ADP is introduced in [42] to cut the need for drift dynamics and to control a CT nonlinear system with saturated input. In [43], the authors considered a CT neural network parameter update law based on variable gain gradient descent augmented with robust terms for the model-free IRL-based H∞ optimal tracking control problem of a CT nonlinear system with unknown dynamics for disturbance rejection.…”
Section: A Literature Review
confidence: 99%
“…Off-policy RL algorithms have been developed to achieve the H∞ tracking control of continuous-time (CT) nonlinear systems with unknown system dynamics [20, 21]. Q-learning [22] is one of the RL algorithms that learns the optimal Q-function. The Q-function explicitly contains the control actions and evaluates all possible actions for each state.…”
Section: Introduction
confidence: 99%
“…RL‐based techniques have also been applied to provide the model‐free solution to the H∞ tracking control problem. Off‐policy RL algorithms have been developed to achieve the H∞ tracking control of continuous‐time (CT) nonlinear systems with unknown system dynamics [20, 21]. Q‐learning [22] is one of the RL algorithms that learns the optimal Q‐function.…”
Section: Introduction
confidence: 99%