2021
DOI: 10.48550/arxiv.2112.04085
Preprint

Diffeomorphically Learning Stable Koopman Operators

Abstract: We propose a novel framework for constructing linear time-invariant (LTI) models for data-driven representations of the Koopman operator for a class of stable nonlinear dynamics. The Koopman operator (generator) lifts a finite-dimensional nonlinear system to a possibly infinite-dimensional linear feature space. To utilize it for modeling, one needs to discover finite-dimensional representations of the Koopman operator. Learning suitable features is challenging, as one needs to learn LTI features that are both K…

Cited by 3 publications (8 citation statements) | References 18 publications
“…where in the third and fourth equalities we have used the definition of v_{+s} and v_{−s} and their orthogonality. Now, for the function g^*(·) = D(·)r^* ∈ span(D), one can use (17) to see that RRMSE_{g^*} = s. Hence, the equality in (19) holds, and this concludes the proof. Remark V.4: (Working with the Consistency Matrix is More Efficient than the Difference of Projections): According to Theorems V.1 and V.3, one can use the consistency matrix M_C ∈ R^{N_d×N_d} or the difference of projections D(Y)D(Y)^† − D(X)D(X)^† ∈ R^{N×N} interchangeably to compute the relative root mean square error.…”
Section: Consistency Index Determines EDMD's Prediction Accuracy on Data
confidence: 62%
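The remark above states that the consistency matrix and the difference of projections can be used interchangeably. A minimal numpy sketch of the second quantity, with random matrices standing in for the dictionary snapshot matrices D(X) and D(Y) (the data and dimensions here are assumptions, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for the dictionary snapshot matrices D(X), D(Y):
# N data points lifted through Nd dictionary functions (synthetic).
N, Nd = 50, 5
DX = rng.standard_normal((N, Nd))
DY = rng.standard_normal((N, Nd))

# Orthogonal projections onto the column spaces of D(X) and D(Y),
# built from Moore-Penrose pseudoinverses: A @ pinv(A) projects onto
# range(A) and is symmetric.
PX = DX @ np.linalg.pinv(DX)
PY = DY @ np.linalg.pinv(DY)

# The N x N difference of projections from the remark.
diff = PY - PX

# It is symmetric, and its eigenvalues always lie in [-1, 1].
eigs = np.linalg.eigvalsh(diff)
print(eigs.min(), eigs.max())
```

The remark's point is that M_C is only N_d × N_d, so for N ≫ N_d working with the consistency matrix avoids forming this N × N object.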
“…Let v^* be such that p^* = D(Y)v^*. Using (18) for v^* instead of v_f, and the properties of p^*, one can write (17) to see that RRMSE_{f^*} = 1 = s. Hence, equality holds in (19).…”
Section: Consistency Index Determines EDMD's Prediction Accuracy on Data
confidence: 99%
“…In deep learning, an encoder can be trained to map the observations to a latent state that follows an ODE (Rubanova et al, 2019; Doyeon et al, 2021; de Brouwer et al, 2019). The particular case in which this ODE is linear and evolves according to the Koopman operator (which can be jointly approximated) is investigated in Lusch et al (2018) and Bevanda et al (2021). However, little insight into the desired latent representation is usually provided.…”
Section: Partial Observations
confidence: 99%
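The passage describes encoding observations into a latent state that evolves linearly under (an approximation of) the Koopman operator. A minimal numpy sketch of that idea, using a hand-picked polynomial encoder whose span happens to be invariant under a toy map x⁺ = 0.8x (both the encoder and the system are assumptions for illustration; in the cited works the encoder is a trained neural network, learned jointly with the linear operator):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical fixed encoder standing in for a trained deep encoder: it
# lifts a scalar observation into a 3-D latent state. Because its span
# is invariant under the toy map below, the latent dynamics are exactly
# linear.
def encode(s):
    return np.stack([s, s**2, s**3], axis=-1)

# Toy stable map x+ = 0.8 x, sampled at random states.
x = rng.uniform(-1.0, 1.0, 100)
ZX, ZY = encode(x), encode(0.8 * x)

# Fit the linear latent dynamics z+ = z A by least squares.
A, *_ = np.linalg.lstsq(ZX, ZY, rcond=None)

# Roll the latent state forward linearly from x0 = 0.9; the first
# latent coordinate equals the state itself, so it should track
# 0.9 * 0.8**k.
z, traj = encode(np.array(0.9)), [0.9]
for _ in range(10):
    z = z @ A
    traj.append(z[0])
err = max(abs(traj[k] - 0.9 * 0.8**k) for k in range(11))
print(err)
```

Here the invariance is engineered by hand; the appeal of the learned-encoder approach is that it searches for such a representation automatically when one is not known.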
“…In order to address this, some recent works [7][8][9][10][11][12][13][14] have introduced deep neural networks (DNNs) into EDMD to approximate time-invariant systems, using a DNN as the observable function for the Koopman operator. In these papers, the DNN observable function is tuned with respect to the collected data of state-control pairs by minimizing a properly defined loss function.…”
Section: Introduction
confidence: 99%
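The EDMD fit underlying this passage can be sketched in a few lines of numpy. Here a fixed monomial dictionary stands in for the learned DNN observable, and the toy system and its parameters are assumptions for illustration; the "properly defined loss function" is taken to be the standard EDMD least-squares residual ||D(Y) − D(X)K||, which the cited works additionally minimize over the DNN parameters:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stable nonlinear map x+ = 0.9 x - 0.1 x**3 (illustrative only).
x = rng.uniform(-1.0, 1.0, 200)
y = 0.9 * x - 0.1 * x**3

# Fixed monomial dictionary standing in for the DNN observable function;
# in the cited works these features are learned from data.
def dictionary(s):
    return np.stack([s, s**2, s**3], axis=1)

DX, DY = dictionary(x), dictionary(y)

# EDMD: least-squares Koopman matrix K minimizing ||D(Y) - D(X) K||_F.
K, *_ = np.linalg.lstsq(DX, DY, rcond=None)
loss = np.linalg.norm(DY - DX @ K) / np.sqrt(len(x))
print(loss)
```

Since x⁺ itself lies in the span of the dictionary, the first column of the fit is exact; the residual comes from the higher-order observables, which is precisely the gap that learning the dictionary aims to close.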