2022
DOI: 10.1109/tnnls.2021.3055761
Optimal Tracking Control of Nonlinear Multiagent Systems Using Internal Reinforce Q-Learning

Cited by 71 publications (23 citation statements)
References 54 publications
“…Hitherto, neural networks (NNs), as an important branch in the advance of artificial intelligence, have attracted ever-increasing research interest within the systems science and control communities in recent decades, owing primarily to their numerous applications, such as image encryption, 1,2 reinforcement learning, 3,4 smart antenna arrays, gramophone noise detection and reconstruction, secure communication, 5 and other areas, which are attributable to their powerful computing abilities and physical realizations. In fact, these practical applications depend heavily on the dynamic behaviors of the underlying NNs.…”
Section: Introduction
confidence: 99%
“…Furthermore, several robust control techniques based on an observer were proposed in [24][25][26][27]. Recent methods proposed novel formation tracking control with good results, such as finite-time criteria design [28][29][30], optimal design with Q-learning [31], consensus design [32], command filtered backstepping [33], the adaptive backstepping technique [34], robust sliding mode with fixed-time stability [35], data driven-based formation control [36], output feedback formation [37], adaptive nonsingular terminal sliding mode [38], and neural networks [39]. However, most studies mentioned above lack experimental results or asymptotical convergence in quadcopter systems.…”
Section: Introduction
confidence: 99%
“…For the past few years, as one of the most effective and popular online learning methods, actor-critic RL was extensively applied in the research field of optimal control. [15][16][17][18][19][20][21][22][23] By utilizing a modified cost function, an optimal robust control strategy was proposed for a class of uncertain nonlinear systems. 19 In Reference 22, based on RL algorithm and improved Hamilton-Jacobi-Bellman (HJB) function, the finite-time optimal control problem was addressed for uncertain nonlinear systems with dead-zone input.…”
Section: Introduction
confidence: 99%
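The excerpt above refers to actor-critic reinforcement learning as an online method for optimal control: a critic estimates the value function while an actor adjusts the policy using the critic's temporal-difference (TD) error. As a minimal illustration of that structure (a generic one-state, two-action toy problem with illustrative names and parameters, not the method of the cited paper):

```python
import math
import random

def softmax(prefs):
    """Convert action preferences into a probability distribution."""
    m = max(prefs)
    exps = [math.exp(p - m) for p in prefs]
    s = sum(exps)
    return [e / s for e in exps]

def train_actor_critic(steps=2000, alpha_v=0.1, alpha_pi=0.1, seed=0):
    """Tabular actor-critic on a one-state problem:
    the critic tracks the state value v, and the actor updates
    softmax action preferences in the direction of the TD error."""
    rng = random.Random(seed)
    prefs = [0.0, 0.0]    # actor: action preferences
    v = 0.0               # critic: value estimate of the single state
    rewards = [1.0, 0.0]  # action 0 pays 1, action 1 pays 0 (deterministic)
    for _ in range(steps):
        probs = softmax(prefs)
        a = rng.choices([0, 1], weights=probs)[0]
        r = rewards[a]
        delta = r - v              # TD error (episodic, no successor state)
        v += alpha_v * delta       # critic update toward observed return
        # actor update: policy-gradient step on the preferences
        for b in range(2):
            grad = (1.0 if b == a else 0.0) - probs[b]
            prefs[b] += alpha_pi * delta * grad
    return prefs, v

prefs, v = train_actor_critic()
```

After training, the actor strongly prefers the higher-reward action and the critic's value estimate approaches the reward obtained under that policy; the same critic-drives-actor loop underlies the continuous-state, NN-approximated schemes surveyed in the excerpt.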