2006
DOI: 10.1007/11731139_94

Neighbor Line-Based Locally Linear Embedding

Abstract: Locally linear embedding (LLE) is a powerful approach for mapping high-dimensional data nonlinearly to a lower-dimensional space. However, when the training examples are not densely sampled, LLE often returns invalid results. In this paper, the NL3E (Neighbor Line-based LLE) approach is proposed, which generates virtual examples with the help of neighbor lines so that LLE can be executed on an enriched training set. Experiments show that NL3E outperforms LLE in visualization.
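To make the idea concrete, below is a minimal sketch of the NL3E pipeline, assuming the virtual examples can be approximated as projections of each training point onto the line segments spanned by pairs of its nearest neighbors; the exact generation rule and parameter choices in the paper may differ, and `neighbor_line_virtual_examples` is an illustrative helper, not the authors' reference implementation.

```python
# Sketch of the NL3E idea (assumption: virtual examples are projections of
# each point onto the "neighbor lines" through pairs of its nearest
# neighbors), followed by standard LLE on the enriched training set.
import numpy as np
from itertools import combinations
from sklearn.neighbors import NearestNeighbors
from sklearn.manifold import LocallyLinearEmbedding

def neighbor_line_virtual_examples(X, k=4):
    """Project each point onto the lines through pairs of its k nearest
    neighbors and return the projections as virtual examples."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nn.kneighbors(X)              # idx[:, 0] is the point itself
    virtual = []
    for x, neighbors in zip(X, idx[:, 1:]):
        for i, j in combinations(neighbors, 2):
            d = X[j] - X[i]                # direction of the neighbor line
            t = np.dot(x - X[i], d) / np.dot(d, d)
            t = np.clip(t, 0.0, 1.0)       # stay on the segment between neighbors
            virtual.append(X[i] + t * d)
    return np.asarray(virtual)

X = np.random.rand(100, 10)                # stand-in for a sparsely sampled set
X_aug = np.vstack([X, neighbor_line_virtual_examples(X, k=4)])
Y = LocallyLinearEmbedding(n_neighbors=8, n_components=2).fit_transform(X_aug)
```

Only the first `len(X)` rows of `Y` embed the original examples; the remaining rows embed the virtual points and can be discarded after the mapping is computed.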

Cited by 6 publications (3 citation statements). References 13 publications (13 reference statements).
“…First, it introduces additional parameters during local linear surface estimation; second, it ignores the statistical feature, which has been used in charting a manifold (Brand 2003). Zhan and Zhou (2006) propose a Neighbor Line-based LLE to solve the problem of not-densely-sampled data sets by generating virtual examples with the help of the Nearest Feature Line (NFL) (Li and Lu 1999). Of course, the computational cost of the algorithm is higher than that of the original LLE.…”
Section: Extra Processing on the Neighborhood
confidence: 98%
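For context, the Nearest Feature Line construction from Li and Lu (1999) that this statement refers to projects a query point onto the line through two feature points, and the residual gives the feature-line distance. A minimal sketch follows; the function name is illustrative, not from either paper.

```python
# NFL projection (Li and Lu 1999): project query x onto the line through
# feature points x1 and x2; the residual norm is the feature-line distance.
import numpy as np

def feature_line_projection(x, x1, x2):
    d = x2 - x1                              # direction of the feature line
    mu = np.dot(x - x1, d) / np.dot(d, d)    # position parameter along the line
    p = x1 + mu * d                          # foot of the perpendicular
    return p, np.linalg.norm(x - p)          # projection point and NFL distance
```

NL3E reuses this projection between neighboring points to place its virtual examples; the extra projections help explain the additional computational cost noted above.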
“…(Varini et al 2006), weighted distance (Pan et al 2009), kernel relative transformation (Guihua et al 2008), a new distance (Wang et al 2006); searching rules: outlier removal (Chang and Yeung 2006; Hadid and Pietikainen 2003; Park et al 2004; Wang 2008), using two searching rules (Eftekhari et al 2009a, b; Yulin et al 2008), automatic selection of the neighborhood size (Karbauskaitė et al 2007; Kouropteva et al 2002; Lingzhu et al 2009; Valencia-Aguirre et al 2009); extra processing: neighbor smoothing (Yin et al 2008), virtual data generating (Zhan and Zhou 2006), "short circuit" edges pruning (Xia et al 2008); use a transformation between the original and projected data (Bengio et al 2003; Saul and Rowels 2004)…”
Section: Introduction
confidence: 98%
“…To solve the problem above, many early works have been carried out, such as using other distance metrics [12,13,14], using other rules to search the neighborhood [15,16,17,18], or embedding extra processing on the selected neighborhood [19,20]. While the computational costs of the existing methods are much higher than that of the original LLE, the performance has not been fundamentally improved.…”
Section: Introduction
confidence: 98%