2006
DOI: 10.1016/j.patrec.2005.11.017

Selection of the optimal parameter value for the Isomap algorithm

Cited by 118 publications (68 citation statements)
References 12 publications
“…These are the presence of minimal shortcuts and a smooth embedding curve. Typically, dimensionality reduction using Isomap is sensitive to κ, the number of nearest neighbors, and there are several methods in the literature that aim to locate the optimal choice [59]. While we do not claim to have addressed the problem of sensitivity to κ, we do provide a supervision step by successively increasing the temporal weighting parameter λ to obtain an acceptable low-dimensional embedding for a given value of κ.…”
Section: Discussion (mentioning)
confidence: 99%
“…MDS (MultiDimensional Scaling), PCA (Principal Component Analysis) [9], ISOMAP (Isometric Feature Mapping), LLE, Hessian LLE, Laplacian Eigenmap, and Diffusion Maps are some of the manifold learning techniques [2,8].…”
Section: Manifold Learning Techniques (mentioning)
confidence: 99%
“…To compute the geodesic distances, we need to decide on the number of nearest neighbors, k. If k is too large, it would introduce short-circuit edges that bypass the true geometry of the manifold reflecting the non-linear structure of the data; if k is too small, it would cause the manifold to fragment into a large number of disconnected clusters. Following Samko et al. (2006), we choose k by maximizing |ρ(D, Φ_{k,p})|, where D and Φ_{k,p} are the matrices of the Euclidean distances between pairs of points in the original space and the feature space, respectively, and ρ(·, ·) is the linear correlation coefficient. Note that Φ_{k,p} depends on p, the dimension of the space of the embeddings.…”
mentioning
confidence: 99%
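A minimal sketch of the selection criterion described in the excerpt above, assuming scikit-learn's Isomap and a data matrix X with one sample per row; the function name embedding_correlation and the use of Pearson correlation on condensed pairwise-distance vectors are illustrative assumptions, not details taken from the cited paper.

```python
import numpy as np
from scipy.spatial.distance import pdist
from sklearn.manifold import Isomap

def embedding_correlation(X, k, p):
    """Absolute linear correlation |rho(D, Phi_{k,p})| between pairwise Euclidean
    distances in the original space and in the p-dimensional Isomap embedding
    obtained with k nearest neighbors (illustrative helper, not from the paper)."""
    emb = Isomap(n_neighbors=k, n_components=p).fit_transform(X)
    d_orig = pdist(X)    # condensed vector of pairwise distances in the input space
    d_emb = pdist(emb)   # condensed vector of pairwise distances in the embedding
    return abs(np.corrcoef(d_orig, d_emb)[0, 1])
```

Under this reading, a larger value of the criterion indicates that the embedding with parameters (k, p) better preserves the pairwise distance structure of the original data.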
“…Note that Φ_{k,p} depends on p, the dimension of the space of the embeddings. Samko et al. (2006) argued that the data set has its intrinsic dimension, and subsequently, they showed empirically that p does not change even if k changes. Hence, we decide to first estimate p for an arbitrary (but reasonable) choice of k and then choose the optimal k with this p.…”
mentioning
confidence: 99%
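Continuing the sketch, the two-step procedure in the excerpt above might look like the following, reusing embedding_correlation from the previous snippet. It assumes p has already been estimated once (with any intrinsic-dimension estimate at an arbitrary but reasonable k); the helper name select_k and the candidate range for k are placeholders, not values from the cited work.

```python
def select_k(X, k_candidates, p):
    """Return the neighborhood size k maximizing |rho(D, Phi_{k,p})| with the
    embedding dimension p held fixed (step two of the procedure sketched above)."""
    scores = {k: embedding_correlation(X, k, p) for k in k_candidates}
    return max(scores, key=scores.get)

# Usage sketch (p_hat assumed to come from a prior intrinsic-dimension estimate):
# k_hat = select_k(X, k_candidates=range(5, 31), p=p_hat)
```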