2008 IEEE Workshop on Machine Learning for Signal Processing
DOI: 10.1109/mlsp.2008.4685508

Scalable semidefinite manifold learning

Cited by 10 publications (11 citation statements)
References 7 publications
“…In [8] a comparison of MVU with other manifold learning techniques is given, showing that MVU gives the best results. In [11], a scalable variant of MVU, Maximum Furthest Neighbors Unfolding (MFNU), is proposed. MFNU tries to find new coordinates for the given dataset that preserve the original local distances between the points while maximizing the distance between furthest neighbors.…”
Section: Maximum Variance Unfolding (mentioning)
confidence: 99%
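To make the objective in the quote concrete, here is a minimal sketch, not taken from [11]; the penalty form, the function name, and the weighting parameter lam are illustrative assumptions:

```python
# Sketch (not the authors' code) of an MFNU-style objective: keep embedded
# local distances close to the originals while rewarding large distances
# between furthest-neighbor pairs. All names here are hypothetical.
import numpy as np

def mfnu_objective(Y, nn_pairs, nn_dists, fn_pairs, lam=1.0):
    """Penalty objective to minimize for an MFNU-style embedding.

    Y        : (N, d_low) candidate low-dimensional coordinates
    nn_pairs : nearest-neighbor index pairs (i, j) whose distances to preserve
    nn_dists : original high-dimensional distances for those pairs
    fn_pairs : furthest-neighbor index pairs (i, f) to push apart
    lam      : weight of the unfolding (furthest-neighbor) term
    """
    # Local-isometry penalty: embedded distances should match the originals.
    local = sum((np.linalg.norm(Y[i] - Y[j]) - d) ** 2
                for (i, j), d in zip(nn_pairs, nn_dists))
    # Unfolding term: maximizing furthest-neighbor distances is expressed
    # as subtracting them from the minimization objective.
    spread = sum(np.linalg.norm(Y[i] - Y[f]) ** 2 for i, f in fn_pairs)
    return local - lam * spread
```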
“…MFNU's scalability is based on the fast dual-tree algorithm [12] for computing all-nearest neighbors and on the L-BFGS optimization algorithm, which has linear cost per iteration. More details can be found in [11]. Given a dataset X ∈ ℝ^{N×d}, where N is the number of points and d is the dimensionality.…”
Section: Maximum Variance Unfolding (mentioning)
confidence: 99%
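A rough illustration of the pipeline this quote describes, under stated substitutions: scikit-learn's NearestNeighbors stands in for the dual-tree all-nearest-neighbors search of [12], a brute-force scan stands in for the furthest-neighbor computation, and scipy's L-BFGS-B solver plays the role of the L-BFGS algorithm. A sketch, not the paper's implementation:

```python
# Illustrative MFNU-style embedding pipeline (a sketch, not the paper's code).
import numpy as np
from scipy.optimize import minimize
from sklearn.neighbors import NearestNeighbors

def embed_mfnu(X, d_low=2, k=5, lam=1.0, seed=0):
    """Embed X (N x d) into d_low dimensions, MFNU-style."""
    X = np.asarray(X, dtype=float)
    N = len(X)
    # k-NN search substituting for the dual-tree algorithm of [12].
    dists, idx = NearestNeighbors(n_neighbors=k + 1).fit(X).kneighbors(X)
    nbr, d_orig = idx[:, 1:], dists[:, 1:]        # drop each point's self-match
    # Furthest neighbor of every point, by brute force for illustration.
    far = np.linalg.norm(X[:, None] - X[None, :], axis=-1).argmax(axis=1)

    def objective(y):
        Y = y.reshape(N, d_low)
        d_new = np.linalg.norm(Y[:, None, :] - Y[nbr], axis=-1)
        local = ((d_new - d_orig) ** 2).sum()     # preserve local distances
        spread = ((Y - Y[far]) ** 2).sum()        # push furthest neighbors apart
        return local - lam * spread

    y0 = np.random.default_rng(seed).normal(size=N * d_low)
    res = minimize(objective, y0, method="L-BFGS-B")  # gradient via finite differences
    return res.x.reshape(N, d_low)
```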
“…This problem has applications in recommender systems, where furthest neighbors can increase the diversity of recommendations [1,2]. Furthest-neighbor search is also a component in some nonlinear dimensionality reduction algorithms [3], complete-linkage clustering [4,5], and other clustering applications [6]. Thus, being able to quickly return furthest neighbors is a significant practical concern for many applications.…”
Section: Introduction (mentioning)
confidence: 99%
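For reference, the exact query this quote is concerned with can be written by brute force in a few lines; the works cited there aim to answer the same query faster than this O(N)-per-query scan (the function name is illustrative):

```python
# Brute-force furthest-neighbor query, O(N) per query. Purely illustrative.
import numpy as np

def furthest_neighbor(X, q):
    """Return the index of the row of X (N x d) furthest from query q."""
    return int(np.linalg.norm(np.asarray(X) - np.asarray(q), axis=1).argmax())
```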
“…These models have the drawback that they are often very costly (quadratic or cubic in the number of data points). Recent approaches provide scalable alternatives, sometimes at the cost of non-convexity of the problem [14,15,16]. However, the kernel has to be chosen prior to training, and no metric adaptation based on the given label information takes place.…”
Section: Introduction (mentioning)
confidence: 99%