2018
DOI: 10.1103/PhysRevE.97.062412

Variational encoding of complex dynamics

Abstract: Often the analysis of time-dependent chemical and biophysical systems produces high-dimensional time-series data for which it can be difficult to interpret which individual features are most salient. While recent work from our group and others has demonstrated the utility of time-lagged covariate models to study such systems, linearity assumptions can limit the compression of inherently nonlinear dynamics into just a few characteristic components. Recent work in the field of deep learning has led to the develo…

Cited by 155 publications (183 citation statements); references: 50 publications. Citing publications span 2018–2024. Citation statements are ordered by relevance below.
“…In this study, we apply similar ideas to determine the chromatin folding coordinate by analyzing an ensemble of structures obtained from single-cell super-resolution imaging (30). Specifically, we used the deep learning framework VAE to derive a deep generative model (49)(50)(51). Compared to existing approaches, the generative model not only compresses the data into a low-dimensional space for reaction coordinate analysis, but also provides an estimate of the probability of each configuration.…”
Section: Results
confidence: 99%
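The claim that the generative model "provides an estimate of the probability of each configuration" follows from the standard variational autoencoder objective; as a general VAE identity (not a formula quoted from the cited work), the evidence lower bound reads

\[
\log p_\theta(x) \;\ge\; \mathbb{E}_{q_\phi(z \mid x)}\!\left[\log p_\theta(x \mid z)\right] \;-\; D_{\mathrm{KL}}\!\left(q_\phi(z \mid x)\,\|\,p(z)\right),
\]

so evaluating the right-hand side for a configuration x bounds its log-probability under the learned model, while the mean of q_\phi(z \mid x) supplies the low-dimensional coordinate used for reaction-coordinate analysis.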
“…The dimensionality reduction process can be conducted using an autoencoder, a self-supervised deep learning model that can be trained to reconstruct a dataset after encoding it into a reduced latent space [20]. Various autoencoders have been developed and have successfully constructed low-dimensional underlying representations of 3D protein conformations [7]-[9]. In this study, a convolutional variational autoencoder (CVAE) was adapted to automatically reduce the high-dimensional conformations from MD simulations to points in a 3D latent space, where the points are also grouped according to shared structural and energetic characteristics [7], [8], [21]-[24].…”
Section: B. Dimensionality Reduction
confidence: 99%
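A minimal sketch of the kind of variational autoencoder described above, written in PyTorch under illustrative assumptions (fully connected layers rather than the convolutional architecture of the cited CVAE, a generic flattened feature vector per frame, and a 3-dimensional latent space); it is not the cited implementation:

import torch
import torch.nn as nn

class VAE(nn.Module):
    def __init__(self, n_features, latent_dim=3, hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU())
        self.fc_mu = nn.Linear(hidden, latent_dim)       # mean of q(z|x)
        self.fc_logvar = nn.Linear(hidden, latent_dim)   # log-variance of q(z|x)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, hidden), nn.ReLU(), nn.Linear(hidden, n_features)
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)   # reparameterization trick
        return self.decoder(z), mu, logvar

def elbo_loss(x, x_recon, mu, logvar):
    # Reconstruction error plus KL(q(z|x) || N(0, I)); its negative is a lower
    # bound on log p(x), which is what allows probability estimates per frame.
    recon = nn.functional.mse_loss(x_recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1.0 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl

# Hypothetical usage on X, a (n_frames, n_features) float tensor of flattened conformations:
#   model = VAE(n_features=X.shape[1])
#   opt = torch.optim.Adam(model.parameters(), lr=1e-3)
#   for epoch in range(50):
#       x_recon, mu, logvar = model(X)
#       loss = elbo_loss(X, x_recon, mu, logvar)
#       opt.zero_grad(); loss.backward(); opt.step()
#   latent = model.fc_mu(model.encoder(X))   # 3-D coordinates for clustering/visualization

The 3-D latent means returned by the encoder play the role of the reduced coordinates that are subsequently clustered and visualized.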
“…One of the popular approaches to evaluating embedding models and analyzing their underlying features is to visualize them. Interactive visualization of these embeddings allows one not only to verify dimensionality reduction methods (i.e., how accurately models capture similarity across groups of simulation frames), but also to potentially interpret the biomolecular mechanisms that lead to specific observations across MD simulations [7]-[9]. However, most existing visualizations of embeddings have limitations for evaluating embedding models and understanding complex MD simulations.…”
Section: Introduction
confidence: 99%
“…This principle implies that a model with longer characteristic timescales is closer to representing the true processes in the data than a model with shorter characteristic timescales. This principle has been used to develop several methods for maximizing eigenvalue-based scores (12,20,21), one of these being the Generalized Matrix Rayleigh Quotient (GMRQ) (12).…”
Section: Suitability of Timescale-Optimized Models
confidence: 99%
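The eigenvalue-based scoring mentioned here can be illustrated numerically: score a featurization by the sum of the leading generalized eigenvalues of the time-lagged correlation matrix against the instantaneous covariance. The sketch below is a training-set-only illustration in the spirit of the GMRQ (the published score is evaluated on held-out data with regularized estimators); the function name and estimator details are assumptions, not the reference implementation:

import numpy as np
from scipy.linalg import eigh

def gmrq_like_score(X, lag, n_modes=3):
    """X: (n_frames, n_features) featurized trajectory; lag: lag time in frames."""
    X = X - X.mean(axis=0)                         # mean-free features
    X0, Xt = X[:-lag], X[lag:]                     # time-lagged pairs
    S = X0.T @ X0 / len(X0)                        # instantaneous covariance (overlap)
    C = X0.T @ Xt / len(X0)                        # time-lagged correlation
    C = 0.5 * (C + C.T)                            # symmetrize (reversibility assumption)
    S = S + 1e-10 * np.eye(S.shape[0])             # tiny ridge for numerical stability
    # Generalized eigenvalue problem C v = lambda S v; the sum of the leading
    # eigenvalues is the matrix Rayleigh-quotient score for the top n_modes modes.
    eigvals = eigh(C, S, eigvals_only=True)
    return np.sort(eigvals)[::-1][:n_modes].sum()

Because eigenvalues map monotonically to implied timescales, a larger score corresponds to longer characteristic timescales, i.e., a model closer to the true slow processes, which is exactly the selection criterion described in the excerpt above.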