2021
DOI: 10.48550/arxiv.2110.06509
Preprint

Learning Stable Koopman Embeddings

Abstract: In this paper, we present a new data-driven method for learning stable models of nonlinear systems. Our model lifts the original state space to a higher-dimensional linear manifold using Koopman embeddings. Interestingly, we prove that every discrete-time nonlinear contracting model can be learnt in our framework. Another significant merit of the proposed approach is that it allows for unconstrained optimization over the Koopman embedding and operator jointly while enforcing stability of the model, via a direc…
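As background for the abstract above, here is a minimal EDMD-style sketch of a Koopman embedding: lift the state through a dictionary of observables and fit a linear operator by least squares. This is standard extended dynamic mode decomposition, not the paper's learned-embedding method; the cubic dictionary and the toy contracting system x_{t+1} = 0.5 x_t are illustrative assumptions.

```python
import numpy as np

# Toy sketch (not the paper's method): lift scalar states with a
# hypothetical polynomial dictionary, then fit a linear Koopman operator K
# by least squares so that phi(x_{t+1}) ~= K phi(x_t).

def lift(x):
    """Hypothetical observable map: [x, x^2, x^3]."""
    return np.array([x, x**2, x**3])

# Trajectory of a simple contracting system x_{t+1} = 0.5 * x_t
xs = [1.0]
for _ in range(20):
    xs.append(0.5 * xs[-1])

Phi = np.stack([lift(x) for x in xs[:-1]], axis=1)       # 3 x T
Phi_next = np.stack([lift(x) for x in xs[1:]], axis=1)   # 3 x T

# Least-squares Koopman operator: K = Phi_next Phi^+
K = Phi_next @ np.linalg.pinv(Phi)

# For this system the lifted dynamics are exactly linear, so K is
# diag(0.5, 0.25, 0.125) and Schur stable (spectral radius 0.5 < 1).
print(np.round(K, 3))
```

Because each monomial observable evolves linearly under x_{t+1} = 0.5 x_t, the fit is exact here; for general nonlinear systems the choice of dictionary determines how well the lifted dynamics are approximated by a linear operator.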

Cited by 3 publications (3 citation statements)
References 26 publications

“…In Section 5.6.1 we show that in our case the eigenvector r_T for the dominant eigenvalue λ_1 of the Koopman operator, restricted to the transition set (called A_F above), also encodes the escape times from F. This implies that other approaches to time series analysis that approximate the dominant mode of the Koopman operator should also be able to determine in which parts of the reconstructed phase space the system is most susceptible to freezing (escape from stepping) [5].…”
Section: Preferred Transition States (mentioning; confidence: 99%)
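The connection between the dominant eigenvalue of a restricted operator and escape times can be illustrated with a toy substochastic matrix. This is a generic Markov-chain example under assumed transition probabilities, not the construction in the cited work: zeroing transitions that leave a set F yields a substochastic matrix whose dominant eigenvalue λ_1 < 1 sets the escape timescale, roughly 1 / (1 - λ_1) steps.

```python
import numpy as np

# Toy illustration (assumed numbers, not the cited analysis): restrict a
# Markov transition matrix to a set F by keeping only transitions that
# stay inside F. Row sums below 1 mean the remaining mass escapes F.

P = np.array([[0.6, 0.3],
              [0.2, 0.7]])

# Dominant left eigenpair of the restricted operator.
eigvals, eigvecs = np.linalg.eig(P.T)
i = np.argmax(eigvals.real)
lam1 = eigvals.real[i]
r = np.abs(eigvecs[:, i].real)
r /= r.sum()                  # quasi-stationary weight over states in F

# Eigenvalues here are 0.9 and 0.4, so the escape timescale is ~10 steps.
print(f"lambda_1 = {lam1:.3f}, escape timescale ~ {1 / (1 - lam1):.1f} steps")
```

The normalized dominant left eigenvector r is the quasi-stationary distribution over F: it weights the states in which long-surviving trajectories concentrate before escaping.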
“…However, the learned Koopman operator is not guaranteed to be stable. [24] proposes a method that guarantees stability by learning the Koopman operator through a particular parameterization which ensures that the computed operator is Schur stable. Further, [25,26,27] propose data-driven approaches for the identification of Koopman-invariant subspaces whose applicability is, however, limited to uncontrolled nonlinear systems.…”
Section: Introduction (mentioning; confidence: 99%)
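The idea of enforcing Schur stability through an unconstrained parameterization can be sketched with a simple norm-scaling trick. This is only an illustration of the concept, not the parameterization used in [24]: any unconstrained matrix W is mapped to an operator K with spectral norm below 1, which is sufficient for Schur stability, so an optimizer can search over W freely.

```python
import numpy as np

# Illustrative stable parameterization (not the one in [24]): scale an
# unconstrained matrix W so that ||K||_2 < 1, which implies the spectral
# radius rho(K) < 1, i.e. K is Schur stable by construction.

def schur_stable(W):
    """Map an arbitrary square matrix W to a Schur-stable operator."""
    return W / (1.0 + np.linalg.norm(W, 2))

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4)) * 10.0   # arbitrary, likely unstable as-is
K = schur_stable(W)

rho = max(abs(np.linalg.eigvals(K)))
print(f"spectral radius = {rho:.3f}")   # always < 1 by construction
```

Note the trade-off: bounding the spectral norm is sufficient but not necessary for Schur stability, so this simple map cannot reach stable matrices whose norm exceeds 1; richer parameterizations cover a larger stable set at the cost of more machinery.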
“…The works in [16], [17] use methods based on neural networks for this task, while [18] directly learns the Koopman eigenfunctions spanning invariant subspaces. Moreover, the works in [19], [20] approximate finite-dimensional Koopman models relying on knowledge about the system's attractors and their stability. In our previous work, we have provided efficient algebraic algorithms to identify exact Koopman-invariant subspaces [21], [22] or approximate them with tunable predefined accuracy [23].…”
Section: Introduction (mentioning; confidence: 99%)