2017
DOI: 10.1109/tnnls.2016.2539366
Approximate Optimal Control of Affine Nonlinear Continuous-Time Systems Using Event-Sampled Neurodynamic Programming

Cited by 60 publications (25 citation statements) · References 21 publications
“…Note that (x_k) is a discretized value of (x). Moreover, as illustrated in other works [22-27], the time-triggered HJB equation lays a foundation for developing the event-triggered HJB equation. In what follows, we recall the time-triggered HJB equation for the auxiliary system.…”
Section: Event-triggered Robust Control Strategymentioning
confidence: 91%
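For context, a minimal sketch of the time-triggered HJB equation referenced in the excerpt above, written in the standard form used throughout this literature for affine nonlinear continuous-time systems. The symbols (dynamics f, g; cost weights Q, R; value function V*) are the conventional ones and are assumed here, not reproduced from the cited paper:

```latex
% Affine nonlinear continuous-time dynamics and infinite-horizon cost (standard form)
\dot{x}(t) = f(x(t)) + g(x(t))\,u(t), \qquad
J(x_0) = \int_0^\infty \bigl( Q(x(\tau)) + u(\tau)^{\top} R\, u(\tau) \bigr)\, d\tau

% Time-triggered HJB equation for the optimal value function V^*
0 = \min_{u} \Bigl[ Q(x) + u^{\top} R\, u
      + \nabla V^*(x)^{\top} \bigl( f(x) + g(x)\, u \bigr) \Bigr]

% Closed-form minimizer; event-sampled schemes evaluate this at sampled states x(t_k)
u^*(x) = -\tfrac{1}{2}\, R^{-1} g(x)^{\top} \nabla V^*(x)
```

Event-triggered formulations replace the continuously measured state x(t) in u* with the last sampled state, which is what makes the time-triggered HJB a natural starting point for the event-triggered version.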
“…Remark 3. The triggering instant t_k can be obtained via the triggering condition (24). Thus, the minimal intersample time (Δt_k)_min is available (note: Δt_k = t_{k+1} − t_k, i.e., k ∈ ℕ).…”
Section: Event-triggered Robust Controller Design Via Solving the Evementioning
confidence: 99%
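The triggering condition (24) itself is not reproduced in this excerpt. In this literature such conditions typically take the generic threshold form sketched below; the gap signal e(t) and threshold e_T are standard notation assumed here, not taken from the cited paper:

```latex
% Event-sampling gap between the last sampled state and the current state
e(t) = x(t_k) - x(t), \qquad t \in [t_k,\, t_{k+1})

% A new event fires when the gap norm exceeds a (possibly state-dependent) threshold
t_{k+1} = \inf \bigl\{\, t > t_k : \lVert e(t) \rVert > e_T \,\bigr\}, \qquad
(\Delta t_k)_{\min} = \min_{k \in \mathbb{N}} \bigl( t_{k+1} - t_k \bigr)
```

A strictly positive (Δt_k)_min rules out Zeno behavior, which is why its availability is worth remarking on.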
“…Due to this characteristic of ADP, the phenomenon of ‘the curse of dimensionality’ is overcome [12]. Many synonyms have been proposed for ADP, such as ‘adaptive dynamic programming’ [13-19], ‘approximate dynamic programming’ [20-22], ‘adaptive critic designs’ [23], ‘neurodynamic programming’ [24,25], and ‘reinforcement learning (RL)’ [26,27].…”
Section: Introductionmentioning
confidence: 99%
“…During the last decades, many researchers have made great efforts to overcome the problem of the “curse of dimensionality”. Building on dynamic programming, a large number of reinforcement learning (RL) methods, such as approximate dynamic programming (ADP), neuro-dynamic programming (NDP), and adaptive dynamic programming, have been proposed to address this problem [36,11,8,30,41,45,40,23,38,2,3]. As these RL methods emerged, optimal control also made the transition from model-based to model-free reinforcement learning.…”
mentioning
confidence: 99%