2019
DOI: 10.1049/iet-cta.2019.0560
Adaptive optimal output feedback tracking control for unknown discrete‐time linear systems using a combined reinforcement Q‐learning and internal model method

Cited by 12 publications (8 citation statements)
References 38 publications
“…These periodic disturbances can usually be described by some kinds of sinusoidal signals. For a tracking system, to reduce the tracking error, several strategies have been adopted in the literature, such as internal model control (IMC) [8], [9], predictive control [10], [11], and sliding mode control [12], [13]. Among these methods, internal model control is an effective method for eliminating certain kinds of interference, such as periodic sinusoidal disturbances.…”
Section: Introduction
Mentioning confidence: 99%
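The internal model principle behind this statement can be illustrated with a minimal, self-contained sketch: a sinusoid of known frequency w0 is exactly annihilated by the polynomial 1 − 2·cos(w0)·q⁻¹ + q⁻², which is why a controller that embeds this "internal model" rejects the disturbance in steady state. The frequency, amplitude, and phase below are illustrative values, not taken from the cited works.

```python
import math

# A sinusoidal disturbance d[k] = A*sin(w0*k + phi) satisfies
#   d[k] - 2*cos(w0)*d[k-1] + d[k-2] = 0,
# so the polynomial 1 - 2*cos(w0)*q^-1 + q^-2 annihilates it.
# (w0, A, phi are illustrative, not from the paper.)
w0 = 0.3            # disturbance frequency [rad/sample]
A, phi = 1.5, 0.7   # arbitrary amplitude and phase

d = [A * math.sin(w0 * k + phi) for k in range(200)]

# Apply the annihilating polynomial to the disturbance sequence.
residual = [d[k] - 2 * math.cos(w0) * d[k - 1] + d[k - 2]
            for k in range(2, len(d))]

print(max(abs(v) for v in residual))  # numerically ~0
```

A loop whose controller contains these poles therefore drives the steady-state error due to that sinusoid to zero, for any amplitude and phase.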
“…Specifically, an adaptive PID controller was proposed in [14], a self-tuning PID controller for a soccer robot was proposed in [15], and a Q-learning approach was used to tune an adaptive PID controller in [16]. Moreover, the Q-learning algorithm was applied to the tracking problem for discrete-time systems with an H∞ approach [17] and to linear quadratic tracking control [18,19]. Recently, some relevant applications of model-free reinforcement learning to tracking problems can be found in [20,21].…”
Section: Introduction
Mentioning confidence: 99%
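The Q-learning idea mentioned above can be sketched for a scalar LQ problem: fit a quadratic Q-function from simulated transitions — the learner never uses the model parameters directly — and improve the feedback gain greedily. The plant parameters, probe inputs, and initial gain below are illustrative assumptions, not values from [17–19].

```python
# Q-learning-style policy iteration for a scalar discrete-time LQ problem
#   x' = a*x + b*u,  stage cost  q*x^2 + r*u^2.
# The learner fits Q(x,u) = t0*x^2 + t1*x*u + t2*u^2 from observed
# (x, u, cost, x') tuples only, then improves the policy u = -K*x.
# All numbers are illustrative, not taken from the cited papers.

a, b, q, r = 0.9, 1.0, 1.0, 1.0

def step(x, u):
    """Plant simulator: next state and stage cost (hidden from the learner)."""
    return a * x + b * u, q * x * x + r * u * u

def solve3(M, v):
    """Solve a 3x3 linear system by Gaussian elimination with pivoting."""
    M = [row[:] + [vi] for row, vi in zip(M, v)]
    for i in range(3):
        p = max(range(i, 3), key=lambda k: abs(M[k][i]))
        M[i], M[p] = M[p], M[i]
        for k in range(i + 1, 3):
            f = M[k][i] / M[i][i]
            M[k] = [mk - f * mi for mk, mi in zip(M[k], M[i])]
    t = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        t[i] = (M[i][3] - sum(M[i][j] * t[j] for j in range(i + 1, 3))) / M[i][i]
    return t

def phi(x, u):
    """Quadratic features of the Q-function."""
    return [x * x, x * u, u * u]

K = 0.5  # initial stabilizing gain (|a - b*K| < 1)
for _ in range(8):
    # Policy evaluation: Q_K(x,u) = cost + Q_K(x', -K*x') on three probes.
    D, c = [], []
    for x, u in [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]:
        xn, cost = step(x, u)
        un = -K * xn
        D.append([pi - pn for pi, pn in zip(phi(x, u), phi(xn, un))])
        c.append(cost)
    t0, t1, t2 = solve3(D, c)
    # Policy improvement: argmin_u Q(x,u)  ->  u = -(t1 / (2*t2)) * x.
    K = t1 / (2 * t2)

print(round(K, 4))  # converges toward the LQ-optimal gain (~0.5377 here)
```

The greedy step reproduces the Riccati-based policy-improvement formula, but computed entirely from the fitted Q-function coefficients — this is the sense in which such schemes are "model-free."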
“…3) Inspired by research on ADP-based linear quadratic tracking problems [32], [33], we extended the system states with an approximated external disturbance and introduced an intermediate auxiliary system to generate data for robust critic learning. To the best of our knowledge, this is the first research article that combines robust design and active disturbance attenuation in an ADP-based control scheme.…”
Section: Introduction
Mentioning confidence: 99%
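The state-augmentation step described in this statement can be sketched under the simplifying assumption of a constant disturbance model; the paper's actual disturbance approximation and auxiliary system are not reproduced here. The idea is to append the (estimated) disturbance d to the plant state so that one augmented model captures both.

```python
# Hedged sketch of state augmentation: for a scalar plant
#   x' = A*x + B*u + E*d,
# append the disturbance (modeled here as constant, d' = d) to the state,
# giving z = [x, d] with z' = A_aug z + B_aug u.
# All matrices are illustrative, not from the paper.

A, B, E = 0.8, 1.0, 0.5

A_aug = [[A,   E],
         [0.0, 1.0]]   # disturbance persists: d' = d
B_aug = [B, 0.0]       # input does not act on the disturbance state

def step(z, u):
    """One step of the augmented dynamics z' = A_aug z + B_aug u."""
    x, d = z
    return [A_aug[0][0] * x + A_aug[0][1] * d + B_aug[0] * u,
            A_aug[1][0] * x + A_aug[1][1] * d + B_aug[1] * u]

z = [1.0, 0.3]         # initial state and disturbance estimate
z = step(z, 0.0)
print(z)               # x absorbs E*d; the disturbance estimate carries over
```

A critic (or any optimal-control design) applied to the augmented pair (A_aug, B_aug) then accounts for the disturbance automatically, which is the essence of combining disturbance estimation with ADP-based learning.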