2021
DOI: 10.1016/j.jcp.2021.110219

Data-driven discovery of coarse-grained equations

Cited by 31 publications (20 citation statements). References 45 publications.
“…When the potential terms of the differential equations we seek are not known a priori, or if they include integral operators (e.g. non-local closures [82]), a naive implementation of SINDy gives sub-optimal results. A particularly versatile solution is to use autoencoders to transform the coordinates of the input variable before discovering a differential equation, as in Champion et al [36].…”
Section: Introduction (mentioning)
confidence: 99%
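The coordinate-transform idea in the statement above, encoding the data into new coordinates before applying SINDy, can be sketched in a few lines. This is only an illustration, not the method of Champion et al.: their encoder is a nonlinear autoencoder trained jointly with the SINDy loss, whereas here a linear SVD projection stands in for it, and the data, embedding, and candidate library are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 2.0, 300)
z = np.exp(-2.0 * t)                      # hidden 1-D dynamics: z' = -2z
U = rng.standard_normal((10, 1))          # random linear embedding into 10 dims
X = z[:, None] @ U.T                      # observed high-dimensional trajectory

# Stand-in for the autoencoder: a linear encoder from the SVD.
# (A linear map suffices here only because the embedding is linear.)
_, _, Vt = np.linalg.svd(X, full_matrices=False)
zh = X @ Vt[0]                            # project onto the dominant mode

# Discover the latent ODE by least squares on a small candidate library.
dzh = np.gradient(zh, t)
Theta = np.column_stack([np.ones_like(zh), zh, zh**2])
xi = np.linalg.lstsq(Theta, dzh, rcond=None)[0]
# xi[1] recovers the latent decay rate near -2 in the learned coordinate.
```

The point of the sketch is that the governing equation is simple only in the right coordinates; discovering the transform and the equation together is what the autoencoder approach automates.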
“…The number of different approaches pursued here is vast. It includes the use of simple neural networks [12], recurrent neural networks [13], deep neural networks with dimension reduction via diffusion maps [14], recurrent neural network architectures with dimension reduction accomplished via an autoencoder scheme [15], multilayer perceptrons for generating reduced-order metamodels [2], and direct / constrained equation learning methods [16]. Because these models are designed to learn the appropriate macroscale behavior from an appropriate suite of ML approaches, the end result is frequently high-fidelity predictive ability from the generated network, but usually without the generation of an explicit macroscale equation for the process.…”
Section: Introduction (mentioning)
confidence: 99%
“…A notable development in this pursuit is the sparse identification of nonlinear dynamics (SINDy) algorithm ( [3]), a general framework for discovering dynamical systems using sparse regression. Since its inception in the context of autonomous ordinary differential equations (ODEs), SINDy has been extended to autonomous partial differential equations (PDEs) ( [28,30]), stochastic differential equations (SDEs) ( [2]), non-autonomous systems ( [27]), and coarse-grained equations ( [1]), to name a few. A significant challenge in using SINDy to solve real-world problems is the computation of derivatives from noisy data.…”
(mentioning)
confidence: 99%
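The SINDy framework referenced in this statement amounts to regression of the observed time derivative onto a library of candidate terms, with sparsity enforced by sequentially thresholded least squares. The following is a minimal sketch of that loop on invented noiseless toy data, not the reference implementation; the threshold and library are assumptions.

```python
import numpy as np

def stlsq(Theta, dXdt, threshold=0.1, n_iter=10):
    """Sequentially thresholded least squares: the sparse
    regression step at the core of SINDy."""
    Xi = np.linalg.lstsq(Theta, dXdt, rcond=None)[0]
    for _ in range(n_iter):
        small = np.abs(Xi) < threshold
        Xi[small] = 0.0                       # prune negligible terms
        for j in range(dXdt.shape[1]):        # refit the surviving terms
            big = ~small[:, j]
            if big.any():
                Xi[big, j] = np.linalg.lstsq(
                    Theta[:, big], dXdt[:, j], rcond=None)[0]
    return Xi

# Toy data: x' = -2x, with the derivative supplied analytically so the
# example isolates the regression step from differentiation error.
t = np.linspace(0.0, 2.0, 200)
x = np.exp(-2.0 * t).reshape(-1, 1)
dxdt = -2.0 * x

Theta = np.hstack([np.ones_like(x), x, x**2])  # library: [1, x, x^2]
Xi = stlsq(Theta, dxdt)
# A single active term survives: coefficient near -2 on x.
```

On real data the derivative must be estimated numerically, which is exactly the noise-sensitivity issue the statement raises.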
“…Within the last few years, the consensus has emerged that weak-form SINDy (WSINDy, see [23,24,22]), where integration against test functions replaces numerical differentiation, is a powerful method that is significantly more robust to noisy data, particularly in the context of PDEs. Furthermore, WSINDy's efficient convolutional formulation makes it a viable method for identifying PDEs under the constraints of limited memory capacity and computing power that exist in the online setting.…”
(mentioning)
confidence: 99%