2015
DOI: 10.1016/j.jprocont.2015.06.011

Approximate robust optimization of nonlinear systems under parametric uncertainty and process noise

Abstract: Dynamic optimization techniques for complex nonlinear systems can provide the process industry with sustainable and efficient operating regimes. The problem with these regimes is that they usually lie close to the limits of the process. It is therefore paramount that these operating conditions are robust with respect to the parameter uncertainties and to the process noise such that critical constraints are not violated. Besides the uncertainty in the constraints, also the uncertainty in the obje…

Cited by 57 publications (37 citation statements)
References 45 publications
“…where E [ c i ] is the expected value on the constraint function c i , Var [ c i ] is the variance on the constraint function c i , and α ci is a backoff parameter for c i . This backoff parameter can be chosen based on probabilistic inequalities as, e.g., Cantelli‐Chebyshev's inequality or following the procedure presented in .…”
Section: Methods
confidence: 99%
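As a sketch of how such a backoff parameter can be derived from Cantelli's (one-sided Chebyshev) inequality — assuming the chance constraint P(c_i ≤ 0) ≥ 1 − ε and a negative expected constraint value — the deterministic surrogate reads:

```latex
% Cantelli's inequality: P\big(c_i - E[c_i] \ge t\big) \le \frac{Var[c_i]}{Var[c_i] + t^2}
% Setting t = -E[c_i] (valid for E[c_i] < 0) and requiring the bound to be at most
% \epsilon yields the backed-off deterministic constraint
E[c_i] + \alpha_{c_i}\sqrt{Var[c_i]} \le 0,
\qquad
\alpha_{c_i} = \sqrt{\frac{1-\epsilon}{\epsilon}} .
```

This bound is distribution-free, which is why it is often conservative compared with backoffs computed under a Gaussian assumption.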
“…As discussed in Section 1, the linearization and the UT methods are the most commonly used deterministic approaches to uncertainty propagation in stochastic optimal control. 12,15 Linearization is usually computationally inexpensive but requires the existence and calculation of the Jacobians, which can be cumbersome for complex model functions. On the other hand, the UT method propagates a set of heuristically chosen sigma points through the system dynamics to circumvent the inaccuracies associated with linearization.…”
Section: Problem Statement
confidence: 99%
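The sigma-point propagation described above can be sketched as follows. This is a minimal NumPy implementation of the standard unscented transform, not the cited authors' code; the function name and the default scaling parameters (alpha, beta, kappa) are conventional choices, not taken from the paper.

```python
import numpy as np

def unscented_transform(f, mean, cov, alpha=1e-3, beta=2.0, kappa=0.0):
    """Propagate (mean, cov) through a nonlinear map f via the unscented transform.

    The 2n+1 sigma points are chosen deterministically from a matrix square
    root of the covariance, so no Jacobians of f are required (in contrast
    to linearization).
    """
    n = mean.size
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * cov)            # scaled matrix square root
    sigma = np.vstack([mean, mean + S.T, mean - S.T])  # 2n+1 sigma points
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))     # mean weights
    wc = wm.copy()                                     # covariance weights
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1.0 - alpha**2 + beta)
    Y = np.array([f(x) for x in sigma])                # push points through f
    mean_y = wm @ Y
    diff = Y - mean_y
    cov_y = (wc[:, None] * diff).T @ diff
    return mean_y, cov_y
```

For a linear f the transform is exact, which makes a quick sanity check easy: propagating the identity map returns the input mean and covariance unchanged.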
“…However, the UT method can be expensive for online optimization, as the sigma points must be adapted at each optimization step for any candidate input profile. 15 Furthermore, error bounds and convergence results are not readily available for the UT method. Alternatively, random sampling methods look to replace the probabilistic operators E[·], Var(·), and P(·) with their corresponding sample approximations.…”
Section: Problem Statement
confidence: 99%
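The sample-approximation idea in the excerpt above — replacing E[·], Var(·), and P(·) with their empirical counterparts over drawn parameter samples — can be sketched as below. This is an illustrative Monte Carlo helper, not the cited method; the function name and the example constraint are hypothetical.

```python
import numpy as np

def sample_chance_constraint(c, theta_samples):
    """Empirical estimates of E[c], Var[c], and P(c <= 0) over parameter samples.

    c             : callable mapping one parameter realization to a scalar
                    constraint value (feasible when c <= 0)
    theta_samples : iterable of parameter realizations
    """
    vals = np.array([c(th) for th in theta_samples])
    return vals.mean(), vals.var(ddof=1), np.mean(vals <= 0.0)

# Illustration with a hypothetical scalar constraint c(theta) = theta - 1
# under theta ~ N(0, 1); the true values are E[c] = -1, Var[c] = 1,
# and P(c <= 0) = Phi(1) ~= 0.84.
rng = np.random.default_rng(0)
samples = rng.standard_normal(100_000)
e_c, var_c, p_feas = sample_chance_constraint(lambda th: th - 1.0, samples)
```

Unlike the unscented transform, these estimators come with standard convergence guarantees (errors shrink as one over the square root of the sample count), at the cost of many more model evaluations per candidate input profile.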