2020
DOI: 10.48550/arxiv.2011.04336
Preprint

Sparsely constrained neural networks for model discovery of PDEs

Abstract: Sparse regression on a library of candidate features has emerged as the prime method for discovering the PDE underlying a spatio-temporal dataset. As these features consist of higher-order derivatives, model discovery is typically limited to low-noise, dense datasets because of the errors inherent to numerical differentiation. Neural network-based approaches circumvent this limit, but to date they have ignored advances in sparse regression algorithms. In this paper we present a modular framework that combines deep-learning…
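As a rough illustration of the library-based workflow the abstract refers to, the sketch below builds a candidate library Θ(u) from numerical derivatives of a synthetic field and regresses u_t onto it. The travelling-bump data, grid sizes, candidate terms, and the crude magnitude threshold are all illustrative assumptions, not the paper's setup.

```python
# Minimal sketch of library-based PDE discovery (not the paper's code):
# build a candidate library Theta(u) from numerical derivatives and
# regress u_t onto it, keeping only the largest coefficients.
import numpy as np

# Synthetic data u(x, t) on a regular grid (illustrative only).
x = np.linspace(-8, 8, 256)
t = np.linspace(0, 2, 100)
dx, dt = x[1] - x[0], t[1] - t[0]
u = np.exp(-(x[None, :] - t[:, None]) ** 2)   # travelling bump, shape (nt, nx)

# Numerical derivatives -- this is where noise gets amplified.
u_t = np.gradient(u, dt, axis=0)
u_x = np.gradient(u, dx, axis=1)
u_xx = np.gradient(u_x, dx, axis=1)
u_xxx = np.gradient(u_xx, dx, axis=1)

# Candidate library Theta = [1, u, u_x, u_xx, u_xxx, u*u_x, u*u_xx].
candidates = [np.ones_like(u), u, u_x, u_xx, u_xxx, u * u_x, u * u_xx]
names = ["1", "u", "u_x", "u_xx", "u_xxx", "u*u_x", "u*u_xx"]
Theta = np.stack([c.ravel() for c in candidates], axis=1)

# Plain least squares, then keep only large coefficients (crude sparsity).
xi, *_ = np.linalg.lstsq(Theta, u_t.ravel(), rcond=None)
mask = np.abs(xi) > 0.1 * np.abs(xi).max()
print({n: round(c, 3) for n, c, m in zip(names, xi, mask) if m})
```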

Cited by 3 publications (6 citation statements) · References 14 publications
“…The Kuramoto-Sivashinsky equation describes flame propagation and is given by u_t = -u u_x - u_xx - u_xxxx. The fourth-order derivative makes it challenging to learn with numerical-differentiation-based methods, while its periodic and chaotic nature makes it challenging to learn with neural network-based methods [8]. We show here that using the SBL-constrained approach we discover the KS equation from only a small slice of the chaotic data (256 points in space, 25 time steps), with 20% additive noise.…”
Section: Kuramoto-Sivashinsky
Mentioning confidence: 88%
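The SBL-constrained selection step mentioned in the snippet above can be mimicked, very roughly, with off-the-shelf ARD regression; the random library, noise level, and pruning threshold below are assumptions for illustration, not the cited implementation.

```python
# Sketch of an SBL-style selection step on a precomputed library
# (scikit-learn's ARD regression as a stand-in for the cited SBL solver).
import numpy as np
from sklearn.linear_model import ARDRegression

rng = np.random.default_rng(0)
n_samples, n_terms = 2000, 8
Theta = rng.standard_normal((n_samples, n_terms))                 # candidate features
true_xi = np.array([0.0, -1.0, -1.0, 0.0, 0.0, 0.0, -1.0, 0.0])   # three active terms, KS-like sparsity
u_t = Theta @ true_xi + 0.2 * rng.standard_normal(n_samples)      # noisy time derivative (illustrative)

sbl = ARDRegression(fit_intercept=False).fit(Theta, u_t)
mask = np.abs(sbl.coef_) > 1e-2            # prune terms with negligible posterior mean
print("active terms:", np.flatnonzero(mask), "coefficients:", sbl.coef_[mask].round(2))
```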
“…The mask M describes which terms make up the equation and hence the form of the constraint; in contrast to PINNs, where the constraint is fixed in advance, the mask M that defines it is also learned. This mask is updated periodically by some sparse regression technique, and as terms are pruned the constraint becomes stricter, preventing overfitting of the constraint itself and improving the approximation of the network, which boosts performance significantly [8]. However, the non-differentiability of the mask can lead to issues during training, for example when it is updated at the wrong time or when the wrong terms are accidentally pruned.…”
Section: Introduction
Mentioning confidence: 99%
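A minimal sketch of the mask-gated constraint described above, assuming a small PyTorch network, placeholder observations, and a hand-picked threshold for the periodic mask update; none of these choices come from the cited work.

```python
# Sketch of a mask-gated physics constraint (illustrative, not the cited code).
# The mask M zeroes pruned library terms, so the constraint only enforces the
# currently active candidate equation.
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(torch.nn.Linear(2, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))
n_terms = 7
xi = torch.zeros(n_terms, requires_grad=True)      # library coefficients
mask = torch.ones(n_terms)                          # binary mask M, all terms active at first
opt = torch.optim.Adam(list(net.parameters()) + [xi], lr=2e-3)

for step in range(2000):
    xt = torch.rand(256, 2, requires_grad=True)             # (x, t) collocation points
    u_obs = torch.sin(3.0 * xt[:, :1]).detach()             # placeholder observations
    u = net(xt)
    grads = torch.autograd.grad(u.sum(), xt, create_graph=True)[0]
    u_x, u_t = grads[:, :1], grads[:, 1:2]
    u_xx = torch.autograd.grad(u_x.sum(), xt, create_graph=True)[0][:, :1]
    # Candidate library Theta = [1, u, u_x, u_xx, u*u_x, u*u_xx, u**2].
    Theta = torch.cat([torch.ones_like(u), u, u_x, u_xx, u * u_x, u * u_xx, u ** 2], dim=1)
    residual = u_t - Theta @ (mask * xi).unsqueeze(1)        # constraint uses only active terms
    loss = torch.mean((u - u_obs) ** 2) + torch.mean(residual ** 2)
    opt.zero_grad(); loss.backward(); opt.step()
    if step % 500 == 499:                                    # periodic sparse update of the mask
        mask = (xi.detach().abs() > 0.1).float()             # crude threshold as a stand-in

print("active terms:", mask.nonzero().flatten().tolist())
```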
“…We can therefore expect the deterministic noise δ to be much smaller. To leverage this capability, we implement the adaptive Lasso with stability selection and error control in the deep-learning model-discovery framework DeepMoD [4], [20]. The framework combines a function approximator of u, typically a deep neural network, which is trained with the following loss,…”
Section: Within a Deep Learning Framework To Reduce The Deterministic...
Mentioning confidence: 99%
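The adaptive Lasso with stability selection mentioned in the snippet can be sketched as below, assuming a synthetic library, a plain least-squares fit for the adaptive weights, and arbitrary subsampling and selection thresholds; it is a stand-in for the procedure in the citing work, not a reproduction of it.

```python
# Sketch of adaptive Lasso with stability selection on a candidate library
# (illustrative thresholds, subsampling fraction, and gamma = 1).
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(1)
n, p = 1000, 8
Theta = rng.standard_normal((n, p))
xi_true = np.array([0.0, -1.0, 0.5, 0.0, 0.0, 0.0, 0.0, 0.0])
u_t = Theta @ xi_true + 0.1 * rng.standard_normal(n)

# Adaptive weights from an initial unpenalised fit: w_j = 1 / |beta_j|^gamma.
beta0 = LinearRegression(fit_intercept=False).fit(Theta, u_t).coef_
w = 1.0 / (np.abs(beta0) + 1e-6)

# Stability selection: refit the adaptive Lasso on random subsamples and
# keep terms selected in a large fraction of the fits.
counts = np.zeros(p)
for _ in range(100):
    idx = rng.choice(n, size=n // 2, replace=False)
    # Rescaling columns by 1/w turns the weighted l1 penalty into a plain Lasso.
    fit = Lasso(alpha=0.05, fit_intercept=False).fit(Theta[idx] / w, u_t[idx])
    counts += (np.abs(fit.coef_) > 1e-8)

stable = counts / 100 >= 0.8
print("selected terms:", np.flatnonzero(stable))
```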
“…As pioneering researchers in sparse PDE learning, Rudy et al. [1,9] modified the ridge regression method by imposing hard thresholding, which recursively eliminates terms whose coefficient values fall below a learned threshold. As pointed out in the Limitations section of [1,9] (Section 4 in the Supplementary Materials) and in follow-up studies [4,10,11], the identification quality is very sensitive to data quantity and quality. For example, the terms of the reaction-diffusion equation cannot be correctly identified using data with only 0.5% random noise.…”
Section: Introduction
Mentioning confidence: 99%
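The recursive hard-thresholding scheme attributed to Rudy et al. above is commonly implemented as sequentially thresholded ridge regression; the sketch below follows that pattern with illustrative tolerance and ridge values, and is not the code from [1,9].

```python
# Sketch of sequentially thresholded ridge regression (STRidge-like):
# ridge fit, then repeatedly zero out small coefficients and refit.
import numpy as np

def stridge(Theta, u_t, lam=1e-3, tol=0.1, iters=10):
    """Ridge regression with recursive hard thresholding of small coefficients."""
    n_terms = Theta.shape[1]
    active = np.arange(n_terms)
    for _ in range(iters):
        A = Theta[:, active]
        coef = np.linalg.solve(A.T @ A + lam * np.eye(active.size), A.T @ u_t)
        keep = np.abs(coef) >= tol            # hard threshold on coefficient size
        if keep.all():                        # nothing left to prune
            break
        active = active[keep]
        if active.size == 0:                  # everything pruned away
            return np.zeros(n_terms)
    # Final fit on the surviving terms.
    A = Theta[:, active]
    coef = np.linalg.solve(A.T @ A + lam * np.eye(active.size), A.T @ u_t)
    xi = np.zeros(n_terms)
    xi[active] = coef
    return xi

# Toy usage on a synthetic library.
rng = np.random.default_rng(2)
Theta = rng.standard_normal((500, 6))
u_t = Theta @ np.array([0.0, -1.0, 0.0, 0.5, 0.0, 0.0]) + 0.01 * rng.standard_normal(500)
print(stridge(Theta, u_t).round(2))
```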