Stochastic optimal control via Bellman's principle (2003)
DOI: 10.1016/s0005-1098(03)00238-3

Cited by 71 publications (32 citation statements); citing publications range from 2007 to 2021. References 17 publications (25 reference statements).
“…The saturated optimal control is determined by the minimization of the right-hand side of the HJB equation (18) or (19)…”
Section: Dynamical Programming Equation and Saturated Optimal Control
Mentioning confidence: 99%
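As background for this statement (a generic sketch, not the cited paper's actual equations (18) or (19)): for a scalar controlled diffusion with bounded control, a stationary HJB equation and its pointwise minimization take roughly the form

\[
\min_{|u|\le u_{\max}}\Big[\, L(x,u) + \big(f(x)+u\big)\,\frac{\partial V}{\partial x} + \tfrac{1}{2}\,\sigma^{2}(x)\,\frac{\partial^{2} V}{\partial x^{2}} \Big] = \lambda ,
\]

and because the control enters linearly through \(u\,\partial V/\partial x\), the minimizing control is saturated (bang-bang when the running cost \(L\) carries no control penalty):

\[
u^{*}(x) = -\,u_{\max}\,\operatorname{sgn}\!\left(\frac{\partial V}{\partial x}\right),
\qquad\text{or}\qquad
u^{*}(x) = \operatorname{sat}_{u_{\max}}\!\left(-\frac{1}{2R}\,\frac{\partial V}{\partial x}\right)
\ \text{ when } L \text{ contains a penalty } R\,u^{2}.
\]

All symbols here (\(f,\sigma,L,R,\lambda,V\)) are generic placeholders and need not match the notation of the cited works.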
“…The strategy has been applied to response minimization [12], feedback stabilization [13], and feedback maximization of the reliability [14,15] of quasi-Hamiltonian systems. At the same time, the cell mapping method has been applied by Sun and Crespo [16][17][18] to solve the dynamical programming equation in the stochastic optimal control of nonlinear systems, while a hybrid solution method was proposed by Bratus and Dimentberg [19][20][21] to solve the dynamical programming equation for the bounded control of linear systems subject to external excitations of Gaussian white noise.…”
Section: Introduction
Mentioning confidence: 99%
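To illustrate the cell-mapping idea mentioned in this statement in the simplest possible setting, the sketch below discretizes a scalar controlled system into cells and runs a Bellman value iteration over a bounded control grid. The system, cost, and parameter values are illustrative assumptions, not the algorithm of the cited works [16][17][18].

import numpy as np

# Minimal cell-mapping-style value iteration for a scalar controlled system
# x_{k+1} = x_k + (a*x_k + u)*dt, with bounded control |u| <= 1 and a
# discounted quadratic cost.  Everything here is an illustrative assumption.
a, dt, gamma = -0.5, 0.1, 0.95           # drift coefficient, time step, discount factor
x_cells = np.linspace(-2.0, 2.0, 81)     # state space discretized into cells
u_grid = np.linspace(-1.0, 1.0, 21)      # admissible (saturated) control levels
V = np.zeros_like(x_cells)               # value function sampled at cell centers

def nearest_cell(x):
    """Index of the cell nearest to the continuous state x."""
    return int(np.abs(x_cells - x).argmin())

for sweep in range(1000):                # Bellman (value-iteration) sweeps
    V_new = np.empty_like(V)
    for i, x in enumerate(x_cells):
        best = np.inf
        for u in u_grid:
            x_next = x + (a * x + u) * dt            # cell-to-cell map (noise neglected)
            cost = (x**2 + u**2) * dt + gamma * V[nearest_cell(x_next)]
            best = min(best, cost)                   # minimize the right-hand side over u
        V_new[i] = best
    if np.max(np.abs(V_new - V)) < 1e-9:
        break
    V = V_new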
“…(15) or Eq. (22) and completing the averaging, the following fully averaged Itô equations are obtained:…”
Section: Response of Optimally Controlled Systems
Mentioning confidence: 99%
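For orientation only (the cited equations (15) and (22) are not reproduced here): in stochastic averaging of controlled quasi-Hamiltonian systems, the fully averaged Itô equations typically reduce to a one-dimensional diffusion for the Hamiltonian \(H\),

\[
dH = \Big[\,\bar m(H) + \big\langle u^{*}\,\tfrac{\partial H}{\partial p}\big\rangle_{H}\Big]\,dt + \bar\sigma(H)\, dB(t),
\]

where \(\bar m\) and \(\bar\sigma^{2}\) are the averaged drift and diffusion coefficients, \(\langle\cdot\rangle_{H}\) denotes averaging over the level surface \(H=\mathrm{const}\), and \(B(t)\) is a standard Wiener process. The symbols are generic and need not match the cited paper's notation.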
“…Thus, this control strategy is very promising and deserves further development. At the same time, the cell mapping method has been applied by Crespo and Sun [20][21][22] to solve the dynamical programming equation in the stochastic optimal control of nonlinear systems, and a hybrid solution method was proposed by Bratus and Dimentberg et al. [23][24][25] to solve the dynamic programming equation for the bounded control of linear systems subject to external excitations of Gaussian white noise.…”
Section: Introduction
Mentioning confidence: 99%
“…29 The stochastic optimal controls for linear and nonlinear systems have been studied and many control strategies have been presented. [29][30][31][32][33][34][35][36][37][38][39][40][41][42][43][44][45][46] However, the stochastic optimal control of a nonlinear system with noisy observation has been considered in only a few studies. 43 Under a specified condition, the separation theorem was applied to convert the nonlinear stochastic system with noisy observation into a completely observable linear system for determining the optimal control, but the applicability is strongly limited.…”
Section: Introduction
Mentioning confidence: 99%
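As a reminder of what the separation theorem gives in the classical linear-Gaussian (LQG) case (a generic sketch, not the converted system of the study cited as reference 43): for the partially observed system

\[
dX = (AX + Bu)\,dt + dW, \qquad dY = CX\,dt + dV,
\]

the optimal control under a quadratic cost splits into a Kalman-Bucy filter producing the state estimate \(\hat X\),

\[
d\hat X = (A\hat X + Bu)\,dt + K\,(dY - C\hat X\,dt),
\]

followed by the deterministic LQR feedback \(u = -L\hat X\), where the filter gain \(K\) and the regulator gain \(L\) are obtained from two independent Riccati equations. For nonlinear systems with noisy observations this separation generally fails, which is the limitation the quoted passage points to.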