2016
DOI: 10.1109/tpami.2015.2487981

Nonlinear Dimensionality Reduction via Path-Based Isometric Mapping

Abstract: Nonlinear dimensionality reduction methods have demonstrated top-notch performance in many pattern recognition and image classification tasks. Despite their popularity, they suffer from highly expensive time and memory requirements, which render them inapplicable to large-scale datasets. To address such cases we propose a new method called "Path-Based Isomap". Similar to Isomap, we exploit geodesic paths to find the low-dimensional embedding. However, instead of preserving pairwise geodesic distances, the low…
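The abstract builds on the classical Isomap pipeline: build a k-nearest-neighbor graph, approximate geodesic distances by shortest paths on that graph, then embed with classical MDS. As a rough illustration of that baseline (not of the paper's Path-Based variant, whose details are truncated here), a minimal sketch might look like the following; the function name and parameter choices are illustrative:

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path
from scipy.spatial.distance import pdist, squareform

def isomap(X, n_neighbors=5, n_components=2):
    """Classical Isomap sketch: k-NN graph -> geodesic distances -> MDS."""
    n = X.shape[0]
    D = squareform(pdist(X))                      # pairwise Euclidean distances
    # Keep only each point's k nearest neighbors as graph edges
    # (np.inf marks a non-edge for scipy's dense shortest_path).
    G = np.full((n, n), np.inf)
    for i in range(n):
        idx = np.argsort(D[i])[1:n_neighbors + 1]
        G[i, idx] = D[i, idx]
    G = np.minimum(G, G.T)                        # symmetrize the graph
    geo = shortest_path(G, method="D")            # geodesic (path) distances
    # Classical MDS: double-center the squared distances, take top eigenpairs
    H = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * H @ (geo ** 2) @ H
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1][:n_components]
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0.0))
```

The O(n^2) distance matrix and all-pairs shortest paths are exactly the time/memory bottlenecks the abstract says make standard Isomap inapplicable to large-scale datasets.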

Cited by 26 publications (10 citation statements). References 32 publications.
“…Formulated the Criterion of Evaluating. After the variable selection and training, the prediction value of the soft sensor model based on the improved Elman NN, as in equation (18), needs to be further evaluated. In this paper, the Bayesian information criterion (BIC) is adopted as the evaluation criterion, balancing the complexity of the soft sensor model against its accuracy.…”
Section: The Training Based On Supervised
confidence: 99%
“…where y_i is the prediction value obtained with the modified weight parameters, as in equation (18), and y is the actual value.…”
Section: The Training Based On Supervised
confidence: 99%
“…Embedded methods integrate feature selection and learner training, automatically selecting features while the learner is trained [9]. Moreover, some dimension reduction methods such as principal component analysis (PCA), singular value decomposition (SVD), linear discriminant analysis (LDA), and the ISOMAP algorithm can also be regarded as feature selection methods for special basic data [10][11][12][13]. However, such methods do not consider the correlation and redundancy between attributes before and after dimensionality reduction, and the results lack interpretability.…”
Section: Introduction
confidence: 99%
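The citing statement groups PCA, SVD, and LDA with ISOMAP as linear-projection methods that implicitly perform feature selection. As a minimal sketch of the simplest of these, PCA via SVD of the centered data matrix (the function name and interface are illustrative, not from the cited works):

```python
import numpy as np

def pca(X, n_components):
    """PCA sketch: project data onto the top principal directions.

    Uses the SVD of the centered data matrix; the rows of Vt are the
    principal axes, ordered by explained variance.
    """
    Xc = X - X.mean(axis=0)                       # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T               # low-dimensional coordinates
```

As the quote notes, the projected coordinates are linear mixtures of all original attributes, which is precisely why the result is harder to interpret than an explicit feature subset.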