2022
DOI: 10.48550/arxiv.2205.10740
Preprint

Exact SDP Formulation for Discrete-Time Covariance Steering with Wasserstein Terminal Cost

Abstract: In this paper, we present new results on the covariance steering problem with Wasserstein distance terminal cost. We show that the state history feedback control policy parametrization, which has been used before to solve this class of problems, requires an unnecessarily large number of variables and can be replaced by a randomized state feedback policy which leads to more tractable problem formulations without any performance loss. In particular, we show that under the latter policy, the problem can be equiva…

Cited by 6 publications (16 citation statements)
References 22 publications
“…One of the reasons for the above approach is that, with chance constraints on the sample paths of the input and the state, the state-mean and state-covariance constraints are coupled, which makes it difficult to find the optimal control analytically [7], [8], [10]. When there are no chance constraints, the desired terminal covariance can be replaced with a soft constraint on the Wasserstein distance between the desired and the actual terminal Gaussian distributions, which can be solved using randomized state feedback control via a (convex) semi-definite program (SDP) [12]. Finite-horizon covariance control has also been applied in a model-predictive-control setting [13], [14], in which, at each time step k, an optimal covariance steering problem is solved in a receding-horizon fashion.…”
Section: Introduction
confidence: 99%
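The Wasserstein terminal cost referenced above has a closed form between Gaussians: for N(μ₁, Σ₁) and N(μ₂, Σ₂), the squared 2-Wasserstein distance is ‖μ₁ − μ₂‖² + tr(Σ₁ + Σ₂ − 2(Σ₂^{1/2} Σ₁ Σ₂^{1/2})^{1/2}). A minimal numerical sketch of that formula (the function name `gaussian_w2` and the example matrices are illustrative, not from the paper):

```python
import numpy as np
from scipy.linalg import sqrtm

def gaussian_w2(mu1, S1, mu2, S2):
    """Squared 2-Wasserstein distance between N(mu1, S1) and N(mu2, S2)."""
    root = sqrtm(sqrtm(S2) @ S1 @ sqrtm(S2))  # (S2^{1/2} S1 S2^{1/2})^{1/2}
    # .real discards the negligible imaginary part sqrtm can introduce
    return float(np.sum((mu1 - mu2) ** 2) + np.trace(S1 + S2 - 2 * root.real))

mu = np.zeros(2)
S = np.array([[2.0, 0.3], [0.3, 1.0]])
print(gaussian_w2(mu, S, mu, S))  # ≈ 0.0 for identical Gaussians
```

With identical covariances the trace term vanishes, so a pure mean shift gives exactly ‖μ₁ − μ₂‖², which makes the function easy to sanity-check.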
“…Remark. A different approach that results in the same formulation is the randomized feedback control policy presented in [21]. Therein, the randomness injected into the control policy can be interpreted as a slack variable converting (8b) to an equality.…”
Section: Unconstrained Covariance Steering
confidence: 99%
“…Although the solution of the problem instance in Example 1 does not correspond to an affine state feedback policy, since M_k = L_k Σ_k^{-1} L_k^T is not satisfied, the mean and the covariance of the state and control processes found by solving (16) can still be realized by randomized affine state feedback policies as in [9].…”
Section: B. Exact Covariance Steering
confidence: 99%
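The condition M_k = L_k Σ_k^{-1} L_k^T quoted above is what an affine policy would enforce; when the relaxed moments violate it, the gap W_k = M_k − L_k Σ_k^{-1} L_k^T ⪰ 0 can be realized as the covariance of noise injected into an affine feedback law, which is the slack-variable reading mentioned in the Remark. A one-step numerical sketch, with hypothetical moment data standing in for an SDP solution:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical one-step moments (illustrative, not from the paper):
Sigma = np.array([[1.5, 0.2], [0.2, 1.0]])  # state covariance E[x x^T]
L = np.array([[0.4, 0.1]])                  # cross moment E[u x^T]
M = np.array([[0.9]])                       # control second moment E[u u^T]

K = L @ np.linalg.inv(Sigma)  # deterministic feedback gain
W = M - K @ L.T               # slack: covariance of the injected noise
# W >= 0 certifies the moments are realizable by the randomized policy
# u = K x + w,  w ~ N(0, W)
x = rng.multivariate_normal(np.zeros(2), Sigma)
u = K @ x + rng.multivariate_normal(np.zeros(1), W)
```

When W is exactly zero the policy collapses to deterministic affine feedback, matching the sufficiency claim for additive-noise systems in the next citation statement.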
“…Despite the fact that deterministic affine state feedback policies are sufficient for CS problems for systems excited by additive noise [2], [3], [9], Example 1 shows that the optimal policy for the exact CS Problem 1 may require randomized policies for systems excited by multiplicative noise.…”
Section: B. Exact Covariance Steering
confidence: 99%