2020
DOI: 10.1103/physrevx.10.031056
Extracting Interpretable Physical Parameters from Spatiotemporal Systems Using Unsupervised Learning

Abstract: Experimental data are often affected by uncontrolled variables that make analysis and interpretation difficult. For spatiotemporal systems, this problem is further exacerbated by their intricate dynamics. Modern machine learning methods are particularly well suited for analyzing and modeling complex datasets, but to be effective in science, the result needs to be interpretable. We demonstrate an unsupervised learning technique for extracting interpretable physical parameters from noisy spatiotemporal data and …



Cited by 51 publications (43 citation statements)
References 42 publications
“…For example, Zheng et al [11] used multilayer perceptrons to extract relevant properties from a system of bouncing balls (such as the mass of the balls or the spring constant of a force between the balls) and simultaneously predicted the trajectory of a different set of objects. Lu et al [12] accomplished a similar goal but using a dynamics encoder (DE) with convolutional layers and a propagating decoder (PD) with deconvolutional layers. Here, we present a deep learning architecture, shown in Fig. 4, which is based on the DE-PD architecture.…”
Section: Dynamical System Analysismentioning
confidence: 99%
“…The DE takes in the full input series {x_t}_{t=0}^{T_x} over T_x time steps and outputs a single-dimensional latent variable z. Unlike the original DE-PD architecture presented in [12], the DE here is not a VAE; it consists of several convolutional layers followed by fully connected layers and a batch normalization layer.…”
Section: Dynamical System Analysismentioning
confidence: 99%
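A non-variational encoder head of the kind this citation describes can be sketched as follows. As a labeled simplification, the convolutional stage is replaced by a flatten, so only the fully connected layers and the batch normalization step are shown; all layer sizes and weight scales are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def batch_norm(h, eps=1e-5):
    """Batch normalization across the batch axis (no learned scale/shift here):
    zero mean, unit variance per latent dimension."""
    return (h - h.mean(axis=0)) / np.sqrt(h.var(axis=0) + eps)

def deterministic_encoder(batch, w1, w2):
    """Non-variational DE head: FC layers then batch norm, emitting one
    deterministic latent per sample -- no (mu, log_var) sampling as in a VAE.
    The conv stage is replaced by a flatten for brevity (an assumption)."""
    h = np.maximum(batch.reshape(len(batch), -1) @ w1, 0.0)  # FC + ReLU
    z = h @ w2            # FC down to a single latent dimension
    return batch_norm(z)  # normalize latents across the batch

batch = rng.standard_normal((8, 32, 16))       # 8 series, T_x=32 steps, 16 sites
w1 = rng.standard_normal((32 * 16, 64)) * 0.05
w2 = rng.standard_normal((64, 1)) * 0.05
z = deterministic_encoder(batch, w1, w2)
print(z.shape)  # (8, 1): one latent value per input series
```

The batch normalization at the output keeps the latent distribution centered and unit-scale, which plays a role loosely analogous to the standard-normal prior of a VAE without introducing sampling noise.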
“…As another example, the so-called β-VAE [62] introduces additional constraints to enforce orthogonality and sparsity on the latent space, so that the dimensions are uncorrelated and the VAE will only use the minimum number of dimensions required for reconstruction of the data. This kind of β-VAE-type approach was recently shown to extract parameters that are interpretable as the driving parameters of ordinary differential equations from data of dynamic processes [63].…”
Section: Physical Knowledge Beyond Model Explanationsmentioning
confidence: 99%
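The β-weighted objective behind the β-VAE mentioned above can be written down compactly: the usual reconstruction term plus β times the KL divergence of a diagonal-Gaussian posterior from the standard-normal prior. The sketch below uses a squared-error reconstruction term and illustrative function names; it is a minimal statement of the loss, not an implementation from either cited paper.

```python
import numpy as np

def beta_vae_loss(x, x_hat, mu, log_var, beta=4.0):
    """beta-VAE objective: reconstruction error plus beta times the KL
    divergence KL(N(mu, sigma^2) || N(0, I)) for a diagonal-Gaussian
    posterior. beta > 1 pressures the model toward few, uncorrelated
    latent dimensions, which is what makes the latents interpretable."""
    recon = np.sum((x - x_hat) ** 2)
    kl = 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var)
    return recon + beta * kl

# Sanity check: when the posterior equals the prior (mu=0, log_var=0) and
# reconstruction is perfect, both terms vanish.
mu = np.zeros(3)
log_var = np.zeros(3)
x = np.ones(5)
loss = beta_vae_loss(x, x, mu, log_var)
print(loss)  # 0.0
```

Raising β trades reconstruction fidelity for a more heavily regularized latent space, which is the mechanism by which only the minimum number of latent dimensions ends up being used.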
“…In recent years, there has been increasing interest and rapid advances in applying data-driven approaches, in particular deep learning via neural networks, to problems in the natural sciences [1][2][3][4]. Unlike traditional physics-informed approaches, deep learning relies on extensive amounts of data to quantitatively discover hidden patterns and correlations to perform tasks such as predictive modelling [4,5], property optimization [6,7] and knowledge discovery [8,9]. Its success is thus largely contingent on the amount of data available and a lack of sufficient data can severely impair model accuracy.…”
Section: Introductionmentioning
confidence: 99%