2018
DOI: 10.1109/tsmc.2016.2623766

Discrete-Time Local Value Iteration Adaptive Dynamic Programming: Convergence Analysis



Cited by 142 publications (44 citation statements)
References 51 publications

“…However, because nonlinear partial difference equations must be solved, especially for discrete-time systems, the nonlinear Hamilton-Jacobi-Bellman (HJB) equation of a nonlinear system is more difficult to solve than the Riccati equation. Fortunately, adaptive dynamic programming (ADP), as an effective approach to solving nonlinear optimal control problems, has received much interest. With ADP, not only is the stability of the system guaranteed, but the performance index is also minimized.…”
Section: Introduction
mentioning
confidence: 99%
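For context, the contrast drawn in this statement can be written out. The following is a minimal sketch assuming standard discrete-time optimal control notation; the symbols F, U, and J* are assumptions, not taken from the snippet. For a nonlinear system x_{k+1} = F(x_k, u_k) with infinite-horizon cost sum_k U(x_k, u_k), the optimal cost must satisfy the discrete-time HJB (Bellman optimality) equation

\[
J^{*}(x_k) = \min_{u_k}\bigl\{\, U(x_k, u_k) + J^{*}\bigl(F(x_k, u_k)\bigr) \bigr\},
\]

which in general has no closed-form solution, whereas in the linear-quadratic special case x_{k+1} = A x_k + B u_k with U(x_k, u_k) = x_k^{\top} Q x_k + u_k^{\top} R u_k, the optimal cost reduces to J^{*}(x_k) = x_k^{\top} P x_k, where P solves the discrete-time algebraic Riccati equation

\[
P = A^{\top} P A - A^{\top} P B \,(R + B^{\top} P B)^{-1} B^{\top} P A + Q .
\]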
“…Fortunately, adaptive dynamic programming (ADP), as an effective approach to solving nonlinear optimal control problems, has received much interest [12][13][14][15]. With ADP, not only is the stability of the system guaranteed, but the performance index is also minimized. Normally, ADP approaches are classified into several schemes, such as heuristic dynamic programming.…”
mentioning
confidence: 99%
“…Then, several names emerged, e.g., approximate dynamic programming and asymptotic dynamic programming. Iterative methods are widely used in ADP to obtain solutions of the Bellman equation indirectly. Adaptive dynamic programming can be divided into many categories, such as heuristic dynamic programming (HDP), dual heuristic dynamic programming (DHP), action-dependent DHP (also called Q-learning), globalized DHP, and so on.…”
Section: Introduction
mentioning
confidence: 99%
“…Iterative methods are widely used in ADP to obtain solutions of the Bellman equation indirectly [10][11][12]. Adaptive dynamic programming can be divided into many categories, such as heuristic dynamic programming (HDP) [13][14][15], dual heuristic dynamic programming (DHP) [16], action-dependent DHP (also called Q-learning [17]), globalized DHP [18], and so on. Adaptive dynamic programming has been widely used in real-world applications.…”
Section: Introduction
mentioning
confidence: 99%
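To make the "iterative methods" point concrete, the sketch below runs a textbook HDP-style value iteration, V_{i+1}(x) = min_u { U(x, u) + V_i(F(x, u)) }, on a toy scalar system. It is a minimal illustration only: the dynamics F, utility U, grids, and stopping tolerance are assumptions for this sketch and are not the local value iteration scheme analyzed in the cited paper.

# Minimal value-iteration sketch for a discrete-time nonlinear system.
# The scalar dynamics F, utility U, and grid sizes are illustrative
# assumptions, not taken from the cited paper.
import numpy as np

def F(x, u):
    return 0.8 * np.sin(x) + u            # next state x_{k+1} = F(x_k, u_k)

def U(x, u):
    return x**2 + u**2                     # stage cost U(x_k, u_k)

xs = np.linspace(-2.0, 2.0, 201)           # state grid
us = np.linspace(-1.0, 1.0, 101)           # admissible controls
V = np.zeros_like(xs)                      # V_0(x) = 0 (zero initial value function)

for i in range(200):                       # value-iteration sweeps
    # Evaluate U(x, u) + V_i(F(x, u)) for every grid state and control.
    X, Uc = np.meshgrid(xs, us, indexing="ij")
    X_next = np.clip(F(X, Uc), xs[0], xs[-1])
    V_next = np.interp(X_next, xs, V)      # interpolate V_i at successor states
    Q = U(X, Uc) + V_next
    V_new = Q.min(axis=1)                  # V_{i+1}(x) = min_u { U(x,u) + V_i(F(x,u)) }
    if np.max(np.abs(V_new - V)) < 1e-6:   # stop once the iteration has converged
        break
    V = V_new

policy = us[Q.argmin(axis=1)]              # greedy control on the grid after convergence

The same iteration underlies the HDP/DHP variants the statement lists; they differ mainly in whether the value function, its gradient, or a Q-function is approximated (typically by neural networks rather than a grid).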
“…The proposed control scheme combined the backstepping technique, the DSC approach, and optimal control theory; thus, computational complexity can be greatly reduced without losing optimal control performance, and the appearance of unknown information is avoided in the recursive design. Although other authors have also proposed adaptive optimal control methods based on ADP, those results apply only to discrete-time systems and cannot solve the optimal control problem for continuous-time systems; this paper proposed an adaptive-dynamic-programming-based fuzzy controller that is not only robust to unknown time delays in nonlinear continuous-time systems but also guarantees convergence of the tracking error to an arbitrarily small residual set while achieving the optimal control objective.…”
Section: Introduction
mentioning
confidence: 99%