2021
DOI: 10.48550/arxiv.2106.01510
Preprint
Kernel Learning for Robust Dynamic Mode Decomposition: Linear and Nonlinear Disambiguation Optimization (LANDO)

Peter J. Baddoo,
Benjamin Herrmann,
Beverley J. McKeon
et al.

Abstract: Research in modern data-driven dynamical systems is typically focused on the three key challenges of high dimensionality, unknown dynamics, and nonlinearity. The dynamic mode decomposition (DMD) has emerged as a cornerstone for modeling high-dimensional systems from data. However, the quality of the linear DMD model is known to be fragile with respect to strong nonlinearity, which contaminates the model estimate. In contrast, sparse identification of nonlinear dynamics (SINDy) learns fully nonlinear models, di…
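For context on the linear model the abstract refers to: standard exact DMD fits a best-fit rank-r linear operator A such that Y ≈ AX for snapshot matrices X and Y. The sketch below is illustrative only (it is plain exact DMD, not the paper's kernel-based LANDO method), and all variable names and the toy system are assumptions made here for demonstration.

```python
import numpy as np

def dmd(X, Y, r):
    """Exact DMD: fit a rank-r linear operator A with Y ≈ A X.

    X and Y are snapshot matrices whose columns are states at
    consecutive times (Y holds the time-shifted snapshots).
    Returns the DMD eigenvalues and modes.
    """
    # Truncated SVD of the input snapshots
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r, :]
    # Project the operator onto the leading POD modes:
    # A_tilde = U* Y V S^{-1}
    A_tilde = (U.conj().T @ Y @ Vh.conj().T) / s
    eigvals, W = np.linalg.eig(A_tilde)
    # Lift eigenvectors back to state space: modes = Y V S^{-1} W
    modes = Y @ Vh.conj().T @ (W / s[:, None])
    return eigvals, modes

# Toy example: a genuinely linear system x_{k+1} = A x_k,
# whose spectrum exact DMD recovers
rng = np.random.default_rng(0)
A = np.array([[0.9, 0.2], [0.0, 0.8]])
X = np.empty((2, 50))
X[:, 0] = rng.standard_normal(2)
for k in range(1, 50):
    X[:, k] = A @ X[:, k - 1]
eigvals, _ = dmd(X[:, :-1], X[:, 1:], r=2)
print(np.sort(eigvals.real))  # ≈ [0.8, 0.9]
```

On data generated by a truly linear map, the recovered eigenvalues match the system's spectrum; the abstract's point is that when the underlying dynamics are strongly nonlinear, this linear fit becomes contaminated, which is the failure mode LANDO is designed to address.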


Cited by 3 publications (4 citation statements)
References 107 publications
“…One limitation of the method is that when the duration of the brain network activity is short, window-based approaches miss capturing it. In the future, we intend to address this using an incremental version of the methodology, based on incremental DMD and instantaneous dFCs, together with constrained dictionary learning approaches such as LANDO [37], to enhance resolution and extract all the dynamics in the data. The methodology described in this paper was applied to cognitive tasks; however, it would be interesting to apply the pipeline to resting-state experiments, during which the brain dynamically evolves by shifting from one brain configuration to another [7].…”
Section: Discussion and Future Work
confidence: 99%
“…Given a Koopman operator K of a measure-preserving dynamical system, we use the concept of unitary extension to formally construct a related normal operator K′. That is, suppose that K : L²(Ω, ω) → L²(Ω, ω) is an isometry; then there exists a unitary extension K′ defined on an extended Hilbert space H with L²(Ω, ω) ⊂ H [68, Proposition I.2.3], and unitary operators are normal. Even though such an extension is not unique, it allows us to understand the spectral information of K by considering K′.…”
Section: Unitary Extensions of Isometries
confidence: 99%
“…However, for larger dimensions that arise in applications such as fluid dynamics, molecular dynamics, and climate forecasting, it is not possible to explicitly store a dictionary. DMD is a popular approach to studying Koopman operators associated with high-dimensional dynamics, which can yield accurate results for periodic or quasiperiodic systems but is often unable to adequately capture relevant nonlinear dynamics [6,19,110] as it implicitly seeks to fit linear dynamics. Moreover, a rigorous connection with Galerkin approximations of Koopman operators does not always hold [105].…”
Section: High-dimensional Dynamical Systems
confidence: 99%
“…Recent successful examples of ML algorithms that have been modified to respect physical principles include neural networks [2][3][4][5][6][7][8][9][10], kernel methods [11,12], deep generative models [13], and sparse regression [14][15][16][17][18]. These examples demonstrate that incorporating partially known physical principles into machine learning architectures can increase the accuracy, robustness, and generalizability of the resulting models, while simultaneously decreasing the required training data.…”
Section: Introduction
confidence: 99%