2020
DOI: 10.1007/s11432-019-2663-y
Event-triggered receding horizon control via actor-critic design

Abstract: In this paper, we propose a novel event-triggered near-optimal control for nonlinear continuous-time systems. The receding horizon principle is utilized to improve the system robustness and obtain better dynamic control performance. In the proposed structure, we first decompose the infinite-horizon optimal control problem into a series of finite-horizon optimal problems. Then a learning strategy is adopted, in which an actor network is employed to approximate the cost function and a critic network is used to learn the…
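The scheme the abstract outlines can be illustrated with a minimal sketch: a toy scalar system (x_dot = -x^3 + u, a hypothetical example, not from the paper), a quadratic critic V(x) = w*x^2, a linear actor u = -k*x, and gradient descent on the squared continuous-time Bellman (HJB) residual inside a receding-horizon loop that restarts each finite-horizon episode. Every name, the system, and the update rules are illustrative assumptions, not the paper's actual algorithm.

```python
def bellman_residual(x, w, k):
    """Continuous-time Bellman (HJB) residual for critic V(x) = w*x^2
    and actor u = -k*x on the toy system x_dot = -x^3 + u
    (hypothetical dynamics, not from the paper)."""
    u = -k * x
    f = -x**3 + u          # closed-loop dynamics
    r = x**2 + u**2        # quadratic stage cost
    return r + 2.0 * w * x * f, f

def train(lr=0.05, dt=0.02, horizons=50, steps=25):
    """Receding-horizon actor-critic sketch: the infinite-horizon task
    is split into a series of short finite-horizon episodes, as the
    abstract describes."""
    w, k = 0.0, 0.0
    x = 1.0
    for _ in range(horizons):
        x = 1.0                               # restart each finite-horizon problem
        for _ in range(steps):
            delta, f = bellman_residual(x, w, k)
            w -= lr * delta * (2.0 * x * f)   # critic: descend d(delta^2)/dw
            k += lr * (w - k)                 # actor: track the minimiser u* = -w*x
            x += dt * f                       # integrate closed loop one step
    return w, k, x
```

Here the actor update exploits that minimising the Hamiltonian over u gives u* = -w*x for this cost, so the gain simply tracks the critic weight; the critic alone uses the Bellman-error-square gradient.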

Cited by 17 publications (8 citation statements) · References 38 publications
“…Remark: Different from the existing optimal methods, such as References 15‐23 and 44‐47, in which the updating laws of the actor and critic NNs are obtained through complex calculations via the stability analysis and gradient descent on the squared Bellman error, this article utilizes a simplified RL algorithm, where the actor and critic updating laws can be constructed based on the uniqueness of the optimal solution.…”
Section: Optimized Control Design and Stability Analysis
confidence: 99%
“…For the past few years, as one of the most effective and popular online learning methods, actor-critic RL has been extensively applied in the research field of optimal control. 15-23 By utilizing a modified cost function, an optimal robust control strategy was proposed for a class of uncertain nonlinear systems. 19 In Reference 22, based on an RL algorithm and an improved Hamilton-Jacobi-Bellman (HJB) equation, the finite-time optimal control problem was addressed for uncertain nonlinear systems with dead-zone input.…”
Section: Introduction
confidence: 99%
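For context, the infinite-horizon HJB equation that these actor-critic methods approximately solve has the standard form below (a general textbook statement for input-affine dynamics, not the modified finite-time version of Reference 22):

```latex
% System \dot{x} = f(x) + g(x)u, stage cost r(x,u) = Q(x) + u^{\top} R u:
0 = \min_{u}\Big[\, Q(x) + u^{\top} R u
      + \nabla V^{*}(x)^{\top}\big(f(x) + g(x)u\big) \Big],
\qquad
u^{*}(x) = -\tfrac{1}{2}\, R^{-1} g(x)^{\top} \nabla V^{*}(x).
```

The critic approximates V* and the actor approximates u*; the Bellman error mentioned above is the residual of the bracketed expression under the current approximations.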
“…Based on the methods given in [4-6], the neural network approximation technique and the disturbance observer method can be used to estimate f(x(t)) − f_r(x_r(t)) and D(t) with bounded estimation errors. The estimate of the unknown nonlinear function is denoted Ŵ_a σ(V x).…”
Section: Dear Editor
confidence: 99%
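The approximator in the last statement is a single-hidden-layer network of the form Ŵ_a σ(V x). A minimal offline illustration, assuming fixed random hidden weights and a least-squares fit of the output weights to a hypothetical stand-in nonlinearity (the cited works instead adapt the weights online with bounded-error guarantees):

```python
import numpy as np

def fit_output_weights(f, n_hidden=50, n_samples=200, seed=0):
    """Fit W in the approximator W^T sigma(V x + b) to a target f by
    least squares. V and b are drawn once and frozen; only the output
    layer is trained (a random-feature stand-in for adaptive laws)."""
    rng = np.random.default_rng(seed)
    V = rng.normal(size=(1, n_hidden))           # hidden-layer weights (fixed)
    b = rng.uniform(-2.0, 2.0, size=n_hidden)    # hidden biases (fixed)
    x = np.linspace(-2.0, 2.0, n_samples)[:, None]
    Phi = np.tanh(x @ V + b)                     # sigma(V x + b), one row per sample
    W, *_ = np.linalg.lstsq(Phi, f(x).ravel(), rcond=None)
    return V, b, W

def nn_estimate(x, V, b, W):
    """Evaluate the fitted approximator W^T sigma(V x + b) at points x
    of shape (n, 1)."""
    return np.tanh(x @ V + b) @ W
```

Because the hidden layer is fixed, the fit reduces to a linear least-squares problem in W, which mirrors why adaptive laws for the output weights alone can come with stability proofs.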