2018
DOI: 10.1016/j.acha.2015.06.008

Parsimonious representation of nonlinear dynamical systems through manifold learning: A chemotaxis case study

Abstract: Nonlinear manifold learning algorithms, such as diffusion maps, have been fruitfully applied in recent years to the analysis of large and complex data sets. However, such algorithms still encounter challenges when faced with real data. One such challenge is the existence of "repeated eigendirections," which obscures the detection of the true dimensionality of the underlying manifold and arises when several embedding coordinates parametrize the same direction in the intrinsic geometry of the data set. We propose…
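The "repeated eigendirections" issue described in the abstract is easy to reproduce on synthetic data. The sketch below is not the paper's code; the strip geometry, kernel bandwidth, and correlation diagnostic are assumptions chosen purely for illustration. It computes a diffusion-maps embedding of points on an elongated 2-D strip, where several of the leading nontrivial eigenvectors are typically higher harmonics of the same long coordinate and only one of them varies with the short coordinate.

```python
# Illustrative sketch (not from the paper): diffusion maps on an elongated 2-D strip.
import numpy as np

rng = np.random.default_rng(0)
n = 800
X = np.column_stack([4.0 * rng.random(n), rng.random(n)])      # strip [0, 4] x [0, 1]

# Gaussian kernel on pairwise squared distances (bandwidth chosen ad hoc)
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-D2 / 0.1)

# Symmetric normalization, then recover right eigenvectors of the Markov matrix D^-1 K
d = K.sum(axis=1)
A = K / np.sqrt(np.outer(d, d))
evals, V = np.linalg.eigh(A)
order = np.argsort(-evals)
Phi = (V / np.sqrt(d)[:, None])[:, order]                       # Phi[:, 0] is ~constant

# A genuinely new direction shows up as the single eigenvector with sizable |corr| with y;
# the other leading eigenvectors are typically functions (harmonics) of the long coordinate x.
for k in range(1, 6):
    cx = abs(np.corrcoef(Phi[:, k], X[:, 0])[0, 1])
    cy = abs(np.corrcoef(Phi[:, k], X[:, 1])[0, 1])
    print(f"eigenvector {k}: |corr with x| = {cx:.2f}, |corr with y| = {cy:.2f}")
```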

Cited by 76 publications (101 citation statements). References 34 publications (70 reference statements).

Citation statements (ordered by relevance):
“…DMAPS embedding of the MPC systems showing the training data using three alternative state parametrizations (a) xα*, (b) xβ*, and (c) xγ* and the first 10 steps of the control policy (u*) in the distance metric (see text). (d) Residuals from local linear regression for each parametrization in (a)–(c) suggesting that the first, second, and seventh eigenvectors are the best parametrization of the underlying manifold (see Equation (17) and Reference ). Arguably, only the first two eigenvectors are necessary, though we observe improved prediction accuracy when including the next nonredundant eigenvector.…”
Section: Results
Citation type: mentioning (confidence: 99%)
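The residuals in panel (d) refer to the local-linear-regression criterion of the cited paper: each candidate eigenvector is regressed, with kernel weights and leave-one-out prediction, on the previously retained eigenvectors, and a small normalized residual flags it as a repeated direction. The sketch below follows that idea; the bandwidth heuristic and the least-squares solver details are assumptions, not the authors' implementation.

```python
# Sketch of the local-linear-regression residual criterion for repeated eigendirections.
import numpy as np

def local_linear_residual(Phi_prev, phi_k, eps_reg):
    """Leave-one-out, kernel-weighted local linear fit of phi_k as a function of the
    previously retained eigenvectors Phi_prev; returns the normalized residual r_k.
    Small r_k -> phi_k is (approximately) a function of earlier eigenvectors (repeated)."""
    n = Phi_prev.shape[0]
    W = np.exp(-((Phi_prev[:, None, :] - Phi_prev[None, :, :]) ** 2).sum(-1) / eps_reg)
    np.fill_diagonal(W, 0.0)                                    # leave-one-out weights
    pred = np.empty(n)
    for i in range(n):
        B = np.hstack([np.ones((n, 1)), Phi_prev - Phi_prev[i]])  # local linear design
        a, *_ = np.linalg.lstsq(B.T @ (W[i][:, None] * B), B.T @ (W[i] * phi_k), rcond=None)
        pred[i] = a[0]                                          # fitted value at point i
    return np.sqrt(((phi_k - pred) ** 2).sum() / (phi_k ** 2).sum())

def nonredundant_residuals(Phi):
    """Residuals for Phi[:, 1], Phi[:, 2], ... (Phi[:, 0] is the trivial constant vector).
    By convention the first nontrivial eigenvector gets residual 1."""
    res = [1.0]
    for k in range(2, Phi.shape[1]):
        prev = Phi[:, 1:k]
        d2 = ((prev[:, None, :] - prev[None, :, :]) ** 2).sum(-1)
        res.append(local_linear_residual(prev, Phi[:, k], np.median(d2)))  # ad hoc bandwidth
    return np.array(res)
```

High-residual eigenvectors (for example the first, second, and seventh in the statement above) would be kept as the parsimonious parametrization; low-residual ones would be dropped as repeated directions.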
“…Thus, the intrinsic parametrization will organize the control policy space and the state variable space using as few variables as possible (subject to the limit in Equation ) and hopefully make prediction of the high-dimensional control policies “simpler.” The leading nonredundant eigenvectors can be used to parametrize the manifold of the outputs (the control policies) and therefore provide the coordinates important for predicting them. The remaining (redundant) eigenvectors, sometimes appearing after a large gap in the spectrum, correspond to coordinates in the augmented state space that are unimportant for predicting the control policy and can be eliminated to provide a reduced-order approximation of the control law.…”
Section: Theory
Citation type: mentioning (confidence: 99%)
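One simple way to exploit such a reduced parametrization, once the nonredundant eigenvectors are chosen, is to regress the high-dimensional control policy on those few coordinates. The snippet below is a minimal inverse-distance-weighted k-nearest-neighbor predictor; it is an illustrative stand-in, not the interpolation scheme used in the cited work, and the function and parameter names are hypothetical.

```python
# Minimal reduced-order predictor: a few nonredundant DMAPS coordinates -> control policy.
import numpy as np

def knn_policy_predict(coords_train, U_train, coords_query, k=5):
    """coords_train: (n, d) reduced coordinates of training states;
    U_train: (n, m) corresponding control policies (e.g., the first 10 control moves);
    coords_query: (q, d) reduced coordinates of new states. Returns (q, m) predictions."""
    d2 = ((coords_query[:, None, :] - coords_train[None, :, :]) ** 2).sum(-1)
    idx = np.argsort(d2, axis=1)[:, :k]                         # k nearest training points
    w = 1.0 / (np.take_along_axis(d2, idx, axis=1) + 1e-12)     # inverse-distance weights
    w /= w.sum(axis=1, keepdims=True)
    return np.einsum('qk,qkm->qm', w, U_train[idx])             # weighted average of policies
```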
“…Eigenvectors can also be selected based on mutual information, local linear regression, etc. [30]. Regarding the number of clusters, at present the user inputs a maximal number of Figure 13.…”
Section: Discussion
Citation type: mentioning (confidence: 99%)
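Mutual information is one of the alternative redundancy criteria mentioned above: an eigenvector that is (nearly) a deterministic function of an already-kept one shares high mutual information with it. The histogram-based estimate and greedy selection below are a rough sketch; the bin count and threshold are ad hoc assumptions, and finite-sample bias makes the raw estimate only a qualitative guide, not the criterion used in the cited works.

```python
# Rough sketch of a mutual-information redundancy check between eigenvectors.
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram estimate of the mutual information (in nats) between two 1-D arrays."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

def select_by_mutual_information(Phi, threshold=0.3):
    """Greedily keep eigenvectors whose MI with every already-kept one stays below threshold.
    A repeated eigendirection, being (nearly) a deterministic function of a kept one, shows
    markedly higher MI and is skipped. Phi[:, 0] (the trivial constant vector) is ignored."""
    kept = [1]
    for k in range(2, Phi.shape[1]):
        if all(mutual_information(Phi[:, k], Phi[:, j]) < threshold for j in kept):
            kept.append(k)
    return kept
```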
“…Yet, for data that vary in density and cluster size, the first k eigenvectors will typically not identify k clusters. Instead, several eigenvectors may "repeat" on the same cluster [20,30]. An additional limitation of traditional approaches is that they assume all points belong to a cluster, whereas in calcium imaging (and other biomedical imaging applications), there are pixels belonging to the clutter (background), which are not of interest, and whose proportion out of the full image plane can vary (see Fig.…”
Section: ROI Extraction
Citation type: mentioning (confidence: 99%)
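The "repeated eigenvectors on the same cluster" behavior can be provoked on a toy example with two clusters of very different size and density: after the eigenvector that separates the components, the next several Laplacian eigenvectors tend to vary only within the larger, denser cluster. The setup below is an illustrative assumption, not the calcium-imaging setting of the citing paper.

```python
# Toy illustration: imbalanced clusters where leading Laplacian eigenvectors "repeat".
import numpy as np

rng = np.random.default_rng(1)
big = rng.normal(loc=(0.0, 0.0), scale=1.0, size=(400, 2))      # large, spread-out cluster
small = rng.normal(loc=(8.0, 0.0), scale=0.2, size=(40, 2))     # small, tight cluster
X = np.vstack([big, small])
labels = np.array([0] * 400 + [1] * 40)

D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-D2 / 1.0)                                           # Gaussian affinities
d = K.sum(axis=1)
L_sym = np.eye(len(X)) - K / np.sqrt(np.outer(d, d))            # normalized graph Laplacian
evals, V = np.linalg.eigh(L_sym)

# Where does each leading (nontrivial) eigenvector actually vary?
for k in range(1, 5):
    v = V[:, k]
    print(f"eigenvector {k}: variance on big cluster = {v[labels == 0].var():.3e}, "
          f"on small cluster = {v[labels == 1].var():.3e}")
```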
“…Then, these neighborhoods are either used directly for optimizing low-dimensional embeddings (e.g., in t-SNE [43] and LLE [44]), or they are used to infer a global data manifold by considering relations between them (e.g., using diffusion geometry [14,15,45,46]). In the latter case, the data manifold enables several applications, including dimensionality reduction [14,46], clustering [45,47-49], imputation [15], and extracting latent data features [50-52].…”
Section: Multitask Manifold Learning
Citation type: mentioning (confidence: 99%)
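A common way to pass from local neighborhoods to a global manifold model, as described in the statement above, is to build a k-nearest-neighbor affinity graph, symmetrize it, and row-normalize it into a diffusion operator whose eigenvectors supply the embedding. The helper below is a minimal sketch under those assumptions (dense distance matrix, median-distance bandwidth), not the construction of any specific cited method.

```python
# Minimal sketch: k-NN neighborhoods -> symmetrized affinities -> diffusion operator.
import numpy as np

def knn_diffusion_operator(X, k=15, eps=None):
    """Row-stochastic diffusion operator built from symmetrized k-nearest-neighbor affinities."""
    n = len(X)
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    idx = np.argsort(D2, axis=1)[:, 1:k + 1]                    # k nearest neighbors (no self)
    if eps is None:
        eps = np.median(np.take_along_axis(D2, idx, axis=1))    # bandwidth from kNN distances
    K = np.zeros((n, n))
    rows = np.repeat(np.arange(n), k)
    cols = idx.ravel()
    K[rows, cols] = np.exp(-D2[rows, cols] / eps)
    K = np.maximum(K, K.T)                                      # symmetrize the kNN graph
    return K / K.sum(axis=1, keepdims=True)                     # Markov (diffusion) matrix
```

Its leading right eigenvectors would then provide low-dimensional coordinates, cluster indicators, or latent features, corresponding to the applications listed in the statement above.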