2015
DOI: 10.1109/tac.2015.2406976
Convergence of an Upwind Finite-Difference Scheme for Hamilton–Jacobi–Bellman Equation in Optimal Control

Cited by 22 publications (21 citation statements)
References 17 publications
“…There is no doubt that the feedback control of dynamic systems has many merits compared with open-loop control (Guo and Sun, 2009; Sun and Guo, 2015). However, an undeniable fact is that the latter, open-loop control, has its own advantages in the investigation of infinite-dimensional systems, such as the efficiency and accuracy of open-loop control algorithms, as well as the robustness aspect of investigational systems (Datko, 1988; Sloss et al, 1998).…”
Section: Introduction (mentioning)
confidence: 99%
“…However, an undeniable fact is that the latter, open-loop control, has its own advantages in the investigation of infinite-dimensional systems, such as the efficiency and accuracy of open-loop control algorithms, as well as the robustness aspect of investigational systems (Datko, 1988; Sloss et al, 1998). As for the dynamic programming-viscosity solution approach, it is based on numerically solving the corresponding Hamilton–Jacobi–Bellman equation for optimal control problems (Sun and Guo, 2015). Most of these algorithms suffer from the so-called ‘curse’ of dimensionality (McEneaney, 2006) and are thus confined to only ‘toy’ problems of low dimension.…”
Section: Introduction (mentioning)
confidence: 99%
“…The methods for the numerical solution of HJB equations are under active development [34]. These methods are conceptually related to model-based reinforcement learning methods, i.e., value iteration and policy iteration [35]. As one of the popular methods, policy iteration manipulates the policy directly rather than finding it indirectly via value iteration.…”
Section: Introduction (mentioning)
confidence: 99%
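The value-iteration/policy-iteration distinction drawn in the excerpt above can be sketched on a toy finite MDP. All numbers below (the two-state transition matrices, rewards, and discount factor) are illustrative assumptions, not taken from the cited papers; the point is only the structural difference between iterating on the value function and iterating on the policy itself.

```python
import numpy as np

# Toy 2-state, 2-action MDP; all numbers are illustrative assumptions.
# P[a, s, s'] = transition probability, R[s, a] = immediate reward.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],   # action 0
              [[0.5, 0.5], [0.1, 0.9]]])  # action 1
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])
gamma = 0.9  # discount factor

def value_iteration(tol=1e-10):
    """Find the policy indirectly: iterate the Bellman backup on V."""
    V = np.zeros(2)
    while True:
        Q = R + gamma * np.einsum('ast,t->sa', P, V)  # Q[s, a]
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=1)
        V = V_new

def policy_iteration():
    """Manipulate the policy directly: evaluate exactly, then improve."""
    pi = np.zeros(2, dtype=int)
    while True:
        # Exact policy evaluation: solve (I - gamma * P_pi) V = R_pi.
        P_pi = P[pi, np.arange(2)]     # row s is P[pi[s], s, :]
        R_pi = R[np.arange(2), pi]
        V = np.linalg.solve(np.eye(2) - gamma * P_pi, R_pi)
        # Greedy improvement with respect to the evaluated V.
        pi_new = (R + gamma * np.einsum('ast,t->sa', P, V)).argmax(axis=1)
        if np.array_equal(pi_new, pi):
            return V, pi
        pi = pi_new

V_vi, pi_vi = value_iteration()
V_pi, pi_pi = policy_iteration()
```

Both routines converge to the same optimal policy; policy iteration typically terminates in a handful of exact-evaluation steps, while value iteration needs many contraction sweeps.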
“…The upwind finite-difference scheme is such a well-adapted algorithm and has been successfully applied to many examples [13,14,15]. Moreover, its convergence has been rigorously proven in [28].…”
(mentioning)
confidence: 99%
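To illustrate the upwind idea the excerpt refers to (this is a generic textbook sketch, not the specific scheme analyzed in the paper), consider a 1D minimum-time problem on [0, 1] with dynamics x' = u, |u| <= 1, unit running cost, and the two endpoints as the target set. The value function solves the stationary HJB equation |v'(x)| = 1 with v(0) = v(1) = 0, and the upwind update takes information only from the smaller (upwind) neighbor, which keeps the iteration monotone:

```python
import numpy as np

# Upwind fixed-point sweeps for the 1D HJB equation |v'(x)| = 1 on [0, 1]
# with v(0) = v(1) = 0 (minimum time to reach the nearest endpoint).
n = 101
h = 1.0 / (n - 1)          # grid spacing
v = np.full(n, np.inf)     # "+infinity" away from the target
v[0] = v[-1] = 0.0         # boundary (target) condition

for _ in range(n):         # sweep until the monotone iteration settles
    for i in range(1, n - 1):
        # Upwind update: only the smaller neighbor can feed v[i].
        v[i] = min(v[i], min(v[i - 1], v[i + 1]) + h)

x = np.linspace(0.0, 1.0, n)
# Fixed point = travel time to the nearest endpoint: min(x, 1 - x).
```

The update never increases v and only propagates values from the direction of the characteristics, which is exactly the monotonicity property that convergence proofs for such schemes rest on.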