“…However, while the policy in (2.42) leads to a non-convex control optimization problem in the variables v(t+i|t) and K_j(t+i|t), i = 0, …, N−1, the disturbance-based policy (2.41), which parameterizes the control inputs as affine functions of only the uncertain quantities w(t+i), i = 1, …, N−1, allows the control inputs to be optimized through a convex program. The technique of optimizing 'adjustable' decision variables parameterized as affine functions of the uncertain parameters of the optimization problem has been explored in more general forms in the optimization literature, in the context of adjustable robust counterparts of uncertain problems (see, e.g., [118][119][120]). While the approach of optimizing future inputs or input perturbations as affine functions of past disturbances gives less conservative results through the solution of a tractable problem, its on-line computational complexity is significant: at least of order O(N^3) when solving the relevant quadratic program with the perturbations v(t+i|t) and gains K_j(t+i|t), i = 0, …, N−1, j = 0, …, i−1, as variables [121].…”
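The convexity argument above can be made concrete with a small numerical sketch. The following numpy snippet (the system matrices A, B, the horizon N, and all variable names are illustrative assumptions, not taken from the text) implements a disturbance-affine policy of the form u(t+i|t) = v(t+i|t) + Σ_{j<i} M_{ij} w(t+j) and verifies that, for a fixed disturbance sequence, the predicted trajectory is an affine function of the decision variables (v, M); this is the property that renders the constraints and quadratic cost convex, in contrast to the state-feedback parameterization where the products K_j·x make the problem non-convex.

```python
import numpy as np

# Illustrative double-integrator system and horizon (assumed for this sketch).
np.random.seed(0)
n, m, N = 2, 1, 4                       # state dim, input dim, horizon
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])

def rollout(x0, w, v, M):
    """Predicted trajectory under the disturbance-affine policy
    u_i = v_i + sum_{j<i} M[i][j] @ w_j, for a fixed disturbance sequence w."""
    x, traj = x0.copy(), [x0.copy()]
    for i in range(N):
        u = v[i] + sum((M[i][j] @ w[j] for j in range(i)), np.zeros(m))
        x = A @ x + B @ u + w[i]
        traj.append(x.copy())
    return np.concatenate(traj)

def random_decision():
    """A random decision-variable tuple (v, M); M is lower block-triangular,
    so the policy uses only disturbances already observed."""
    v = [np.random.randn(m) for _ in range(N)]
    M = [[np.random.randn(m, n) for _ in range(i)] for i in range(N)]
    return v, M

x0 = np.array([1.0, 0.0])
w = [0.1 * np.random.randn(n) for _ in range(N)]  # one fixed disturbance path

# Superposition check: an affine map satisfies
# f(a*z1 + (1-a)*z2) == a*f(z1) + (1-a)*f(z2).
(v1, M1), (v2, M2) = random_decision(), random_decision()
a = 0.3
v_mix = [a * p + (1 - a) * q for p, q in zip(v1, v2)]
M_mix = [[a * p + (1 - a) * q for p, q in zip(r1, r2)]
         for r1, r2 in zip(M1, M2)]

lhs = rollout(x0, w, v_mix, M_mix)
rhs = a * rollout(x0, w, v1, M1) + (1 - a) * rollout(x0, w, v2, M2)
print(np.allclose(lhs, rhs))  # affine in (v, M), hence convex constraints

# The number of decision variables grows as O(N^2):
# N*m perturbations plus m*n*N*(N-1)/2 gain entries, which is consistent
# with the (at least) O(N^3) cost of the resulting quadratic program.
n_vars = N * m + m * n * N * (N - 1) // 2
print(n_vars)
```

Repeating the same superposition check with a state-feedback policy u_i = v_i + K_i x_i would fail, since the products K_i·x_i make the trajectory bilinear in the decision variables.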