2008
DOI: 10.1016/j.automatica.2007.10.018
On the infinite time solution to state-constrained stochastic optimal control problems

Cited by 17 publications (17 citation statements). References 6 publications.
“…Additionally, we require both W and R to be positive definite and bounded everywhere on Ω, but otherwise impose no restrictions on them. Contrary to the assumptions in previous work [9,10,14] and the work of Kappen [7] and Broek et al. [12], they are no longer required to be each other's inverse. As formulated, the control u and the noise w enter the state equation via the same matrix G. However, the problem can easily be reformulated so that the control and noise enter via different matrices, as long as these have the same column space [14].…”
Section: Problem Formulation (contrasting)
confidence: 40%
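The column-space condition in the quoted passage is easy to check numerically. The sketch below is illustrative only: the function and the example matrices (`G_u`, `G_w`, `G_bad`) are invented here and do not come from the cited paper.

```python
import numpy as np

def same_column_space(A, B, tol=1e-10):
    """Check whether A and B span the same column space.

    Per the quoted passage, this is the condition under which control
    and noise entering the dynamics through different matrices can be
    rewritten as entering through a single matrix G.  Names are
    illustrative, not from the cited paper.
    """
    return (np.linalg.matrix_rank(A, tol)
            == np.linalg.matrix_rank(B, tol)
            == np.linalg.matrix_rank(np.hstack([A, B]), tol))

# Control and noise matrices with identical column spaces ...
G_u = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
G_w = np.array([[2.0, 1.0], [1.0, 1.0], [0.0, 0.0]])
ok = same_column_space(G_u, G_w)

# ... and a noise matrix reaching a direction the control cannot.
G_bad = np.array([[1.0, 0.0], [0.0, 0.0], [0.0, 1.0]])
bad = same_column_space(G_u, G_bad)
```

Equal individual ranks plus an unchanged rank of the stacked matrix means each column space contains the other, hence they coincide.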
See 3 more Smart Citations
“…Additionally, we require both W and R to be positive definite and bounded everywhere on Ω, but otherwise impose no restrictions on them. Contrary to the assumptions in previous work [9,10,14] and the work of Kappen [7] and Broek et al [12] they are no longer required to relate to the inverse of eachother. As formulated, the control u and the noise w enters the state equation via the same matrix G. However, the problem can easily be reformulated such that the control and noise enter via different matrices as long as they have the same column space [14].…”
Section: Problem Formulationcontrasting
confidence: 40%
“…The nonlinear optimization solver minimizes φ at the same time as it solves (16), (14) and (10). As a starting guess for K we can use (17), and as a starting guess for Z we can solve (14) for ρ = 0, which is a linear eigenvalue problem given our starting guess for K.…”
Section: Solving the Problem (mentioning)
confidence: 99%
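The starting-guess step quoted above reduces to a standard linear eigenvalue problem. As a sketch only, assuming some discretisation of that linear operator is available as a matrix (the matrix `A` below is a toy stand-in, not the operator from the cited paper's equation (14)), the principal eigenpair can be extracted as:

```python
import numpy as np

def principal_eigenpair(A):
    """Return the eigenvalue of largest real part and its eigenvector.

    A is assumed to be a matrix discretisation of the linear operator
    obtained by setting rho = 0; the toy A used below is invented for
    illustration.
    """
    vals, vecs = np.linalg.eig(A)
    k = np.argmax(vals.real)
    lam, z = vals[k].real, vecs[:, k].real
    if z.sum() < 0:   # eigenvector sign is arbitrary; fix it positive
        z = -z
    return lam, z

# Toy symmetric operator with eigenvalues 3 and 1.
A = np.array([[2.0, 1.0], [1.0, 2.0]])
lam, z = principal_eigenpair(A)
```

The resulting eigenpair would then seed the nonlinear solver, as the quoted passage describes.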
“…Dealing with stochastic systems, such investigation has usually been carried out in the framework of complete knowledge of the state of the system (i.e., the state is directly available, with no need for outputs providing possibly noisy/incomplete measurements of the state). The usual difficulties involve the solution of the Hamilton-Jacobi-Bellman (HJB) equations associated with the optimal control problem: in [5], the stochastic HJB equation is iteratively solved with successive approximations; in [6], the infinite-time HJB equation is reformulated as an eigenvalue problem; in [7], a transformation approach is proposed for solving the HJB equation arising in quadratic-cost control for nonlinear deterministic and stochastic systems. Finally, in a pair of recent papers, a solution to the nonlinear HJB equation is provided by expressing it in the form of decoupled Forward and Backward Stochastic Differential Equations (FBSDEs), for an L 2 - and an L 1 -type optimal control setting (see [8,9], respectively).…”
Section: Introduction (mentioning)
confidence: 99%
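The eigenvalue reformulation of the infinite-time HJB equation mentioned in [6] has a well-known discrete-state analogue (linearly-solvable control in the sense of Kappen/Todorov), in which the desirability function z and the average cost c satisfy the principal-eigenpair relation exp(-c) z = diag(exp(-q)) P z. The sketch below illustrates only that analogue, with an invented passive transition matrix `P` and state cost `q`; it is not the continuous-state formulation of the cited papers.

```python
import numpy as np

def solve_z_eigenproblem(P, q, iters=500):
    """Power iteration on the linear operator G = diag(exp(-q)) @ P.

    In the discrete-state analogue of the eigenvalue reformulation,
    the infinite-horizon desirability z and average cost c solve
    exp(-c) z = G z, i.e. a principal-eigenpair problem.  P and q
    here are illustrative stand-ins.
    """
    G = np.diag(np.exp(-q)) @ P
    z = np.ones(len(q))
    for _ in range(iters):
        z = G @ z
        z /= np.linalg.norm(z)
    lam = z @ G @ z / (z @ z)   # Rayleigh quotient of the converged vector
    c = -np.log(lam)            # average cost per stage
    return c, z

# Toy passive dynamics and state costs on two states.
P = np.array([[0.9, 0.1], [0.5, 0.5]])
q = np.array([0.0, 1.0])
c, z = solve_z_eigenproblem(P, q)
```

Since G is nonnegative and irreducible, Perron-Frobenius guarantees a positive principal eigenvector, which is why a positive z (and hence a well-defined value function -log z) can be recovered by plain power iteration.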