2014
DOI: 10.1109/tnnls.2013.2280013

Decentralized Stabilization for a Class of Continuous-Time Nonlinear Interconnected Systems Using Online Learning Optimal Control Approach

Abstract: In this paper, using a neural-network-based online learning optimal control approach, a novel decentralized control strategy is developed to stabilize a class of continuous-time nonlinear interconnected large-scale systems. First, optimal controllers of the isolated subsystems are designed with cost functions reflecting the bounds of interconnections. Then, it is proven that the decentralized control strategy of the overall system can be established by adding appropriate feedback gains to the optimal controllers…
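As a rough illustration of the construction described in the abstract (a hedged sketch only; the symbols d_i, Q_i, R_i, x_i, and u_i are generic assumptions and not taken verbatim from the paper), the local cost of the i-th isolated subsystem can be made to reflect the interconnection bound as follows:

\[
J_i\bigl(x_i(0)\bigr) = \int_{0}^{\infty} \Bigl( d_i^{2}(x_i) + Q_i(x_i) + u_i^{\top} R_i\, u_i \Bigr)\, dt,
\]

where d_i(x_i) denotes an assumed upper bound on the interconnection term acting on the i-th subsystem and Q_i, R_i are the usual state and control weights. Penalizing d_i^2 in the local cost is one way a cost function can "reflect the bounds of interconnections" as the abstract describes, so that adding appropriate feedback gains to the resulting local optimal controllers yields a stabilizing decentralized controller for the overall system.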

Cited by 295 publications (79 citation statements)
References 49 publications (42 reference statements)
“…The main bottleneck in deploying successful distributed MASs is designing secure control protocols that can learn about system uncertainties while showing some level of functionality in the presence of cyber-physical attacks. Reinforcement learning (RL) [7]-[9], inspired by learning mechanisms observed in mammals, has been successfully used to learn optimal solutions online in single-agent systems for both regulation and tracking control problems [10]-[16] and recently for MASs [17]-[19]. Existing RL-based controllers for leader-follower MASs assume that the leader is passive and without any control input.…”
Section: Introduction (mentioning)
confidence: 99%
“…Recently, important results have been obtained on the optimal control problems of nonlinear interconnected systems; see related works [33]-[42] and the references therein. Other works [33]-[35] have proposed a decentralized optimal stabilization control strategy for a class of continuous-time nonlinear interconnected large-scale systems. Qu et al. [36] addressed the state-feedback decentralized tracking problem for a class of nonlinear large-scale interconnected systems.…”
Section: Introduction (mentioning)
confidence: 99%
“…It is well known that optimal control problems have received increasing attention and some valuable results have been developed (see other works [13]-[23] and the references therein). In traditional optimal control of nonlinear systems, the Hamilton-Jacobi-Bellman (HJB) equation usually needs to be solved, but obtaining its solution is challenging because no closed-form analytical solution exists in general.…”
Section: Introduction (mentioning)
confidence: 99%
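For context, the HJB equation referred to in the statement above takes the following standard form for an affine nonlinear system \(\dot{x} = f(x) + g(x)u\) with cost integrand \(Q(x) + u^{\top}Ru\) (generic notation, not taken from the cited works):

\[
0 = \min_{u}\Bigl[\, Q(x) + u^{\top} R u + \bigl(\nabla V^{*}(x)\bigr)^{\top}\bigl(f(x) + g(x)u\bigr) \Bigr],
\qquad
u^{*}(x) = -\tfrac{1}{2}\, R^{-1} g(x)^{\top} \nabla V^{*}(x),
\]

a nonlinear partial differential equation in the optimal value function \(V^{*}\) that generally admits no closed-form solution, which is why the neural-network-based (ADP) approximations discussed in these citing works are used.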
“…Then, Liu et al. [22] extended the approach from single-input single-output (SISO) nonlinear systems to nonlinear large-scale systems and proposed a decentralized optimal control scheme. In addition, the control approaches in the aforementioned works [18]-[22] can be applied only to nonlinear systems with known functions. Although Yang et al. [23] proposed an ADP-based optimal control method for nonlinear systems with unknown functions, this adaptive optimal control method restricts the controlled nonlinear systems to those satisfying the matching condition, as in the works [18]-[22] and other optimal methods, e.g., H∞ control [24] and reinforcement learning.…”
Section: Introduction (mentioning)
confidence: 99%
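The matching condition mentioned above is commonly understood to require that uncertainties or interconnections enter the dynamics through the same channel as the control input; a generic form (an assumption for illustration, not the exact model of the cited works) is

\[
\dot{x}_i = f_i(x_i) + g_i(x_i)\bigl(u_i + \Delta_i(x)\bigr),
\]

where the interconnection term \(\Delta_i(x)\) is multiplied by the same input matrix \(g_i(x_i)\) as the control \(u_i\), so its effect can be compensated directly by the control input.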