2020
DOI: 10.1007/978-3-030-61725-7_25
Sparse Regression and Adaptive Feature Generation for the Discovery of Dynamical Systems

Abstract: We study the performance of sparse regression methods and propose new techniques to distill the governing equations of dynamical systems from data. We first review the generic methodology of learning interpretable equation forms from data proposed by Brunton et al. [3], followed by the performance of LASSO for this purpose. We then propose a new algorithm that uses the dual of the LASSO optimization for higher accuracy and stability. In the second part, we propose a novel algorithm that learns the candidate function…
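The methodology of Brunton et al. that the abstract refers to (SINDy) regresses observed time derivatives onto a library of candidate functions and sparsifies the coefficients. A minimal numpy sketch of the standard sequential thresholded least-squares step is below; it is a generic illustration, not the authors' implementation, and the toy system, library, and threshold value are chosen here for illustration only.

```python
import numpy as np

def stlsq(theta, dxdt, threshold=0.1, n_iter=10):
    """Sequential thresholded least squares: repeatedly solve least
    squares, then zero out coefficients whose magnitude falls below
    `threshold` and refit on the surviving columns."""
    xi = np.linalg.lstsq(theta, dxdt, rcond=None)[0]
    for _ in range(n_iter):
        small = np.abs(xi) < threshold
        xi[small] = 0.0
        big = ~small
        if big.any():
            xi[big] = np.linalg.lstsq(theta[:, big], dxdt, rcond=None)[0]
    return xi

# Toy data: dx/dt = -2 x, with derivatives supplied exactly.
x = np.linspace(-1.0, 1.0, 50)
dxdt = -2.0 * x
theta = np.column_stack([np.ones_like(x), x, x**2])  # library [1, x, x^2]
xi = stlsq(theta, dxdt)
print(xi)  # ≈ [0, -2, 0]: only the x term survives thresholding
```

With exact derivative data the sparsifier recovers the single active library term; in practice the derivatives are estimated numerically, which is where the noise-robustness questions studied in the paper arise.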

Cited by 8 publications (3 citation statements) · References 21 publications
“…In practice, the value of λ may need to be adjusted for different systems and can be determined from the expected sparsity level of Ξ_ab. Alternatively, the sparsity-promoting procedure of the thresholded Least Absolute Shrinkage and Selection Operator (thresholded LASSO) [39] can be used to select the non-zero coefficients in the proposed algorithm without the need to choose λ.…”
Section: Remarks — mentioning
confidence: 99%
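The thresholded LASSO mentioned in the statement above combines an ℓ1-penalized fit with a hard-thresholding and refitting (debiasing) step. A small numpy sketch of the generic idea follows; it uses a plain coordinate-descent LASSO solver, and the λ and threshold values are illustrative assumptions, not taken from reference [39].

```python
import numpy as np

def soft_threshold(z, t):
    """Soft-thresholding operator, the proximal map of the l1 penalty."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    """Coordinate-descent LASSO: min (1/2n)||y - Xw||^2 + lam*||w||_1."""
    n, p = X.shape
    w = np.zeros(p)
    col_sq = (X**2).sum(axis=0) / n  # per-column curvature
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual excluding coordinate j, then prox update.
            r = y - X @ w + X[:, j] * w[j]
            rho = X[:, j] @ r / n
            w[j] = soft_threshold(rho, lam) / col_sq[j]
    return w

def thresholded_lasso(X, y, lam, tau):
    """Run LASSO, drop coefficients below tau, refit survivors by
    ordinary least squares to remove the shrinkage bias."""
    w = lasso_cd(X, y, lam)
    keep = np.abs(w) >= tau
    w_out = np.zeros_like(w)
    if keep.any():
        w_out[keep] = np.linalg.lstsq(X[:, keep], y, rcond=None)[0]
    return w_out

# Same toy system as a SINDy-style regression: dx/dt = -2 x.
x = np.linspace(-1.0, 1.0, 50)
y = -2.0 * x
X = np.column_stack([np.ones_like(x), x, x**2])
w = thresholded_lasso(X, y, lam=0.05, tau=0.5)
print(w)  # the debiased fit keeps only the x term
```

The refitting step is what lets the hard threshold, rather than λ alone, control which coefficients survive, which is the property the citing paper relies on.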
“…Such a requirement on the training data can be a luxury in many scenarios. The requirement of very frequent snapshot data of the system also applies to methods that achieve model discovery using sparse regression and provide interpretable learned models [8,18,19]. All of the above issues are addressed by using neural ordinary differential equations (nODEs; [20]), and some researchers have recently used nODEs for closure modelling.…”
Section: Introduction — mentioning
confidence: 99%
“…Such a requirement on the training data might be a luxury in many scenarios. The requirement of very frequent snapshot data of the system also applies to methods that achieve model discovery using sparse regression and provide interpretable learned models [12,42]. These issues are addressed by using neural ordinary differential equations (nODEs; [16]), and some researchers have recently used nODEs for closure modeling.…”
Section: Introduction — mentioning
confidence: 99%