2006
DOI: 10.1109/tpami.2006.184
Diffusion maps and coarse-graining: a unified framework for dimensionality reduction, graph partitioning, and data set parameterization

Abstract: We provide evidence that non-linear dimensionality reduction, clustering and data set parameterization can be solved within one and the same framework. The main idea is to define a system of coordinates with an explicit metric that reflects the connectivity of a given data set and that is robust to noise. Our construction, which is based on a Markov random walk on the data, offers a general scheme of simultaneously reorganizing and subsampling graphs and arbitrarily shaped data sets in high dimensions using in…

Cited by 531 publications (423 citation statements)
References 21 publications
“…Diffusion Maps was originally proposed in [9][10] to improve learning performance by 'coarse-graining' structure on samples. DM offers a means of controlling noisy data and can also analyse a given data set at different scales.…”
Section: Diffusion Maps and Graph Embedding Framework
mentioning confidence: 99%
“…Following the idea of diffusion maps in [9][10], the feature mapping of a sample x in the original feature space can be written in the form of (2), and the diffusion distance between two data points x and z is given by (3).…”
Section: Diffusion Maps and Graph Embedding Framework
mentioning confidence: 99%
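The feature mapping and diffusion distance this excerpt refers to can be sketched in a few lines of NumPy. This is a minimal illustration of the standard diffusion-map construction, not the cited papers' code; the Gaussian kernel width `eps`, the number of retained coordinates, and the toy data are assumptions:

```python
import numpy as np

def diffusion_map(X, eps=1.0, t=1, n_coords=2):
    """Embed samples X into diffusion coordinates (a minimal sketch)."""
    # Gaussian kernel on pairwise squared Euclidean distances
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / eps)
    # Row-normalise to obtain the Markov transition matrix P
    P = K / K.sum(axis=1, keepdims=True)
    # Right eigenvectors of P, sorted by decreasing eigenvalue
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    vals, vecs = vals.real[order], vecs.real[:, order]
    # Drop the trivial constant eigenvector (eigenvalue 1) and weight
    # the rest by lambda_k**t to obtain the time-t embedding
    return (vals[1:n_coords + 1] ** t) * vecs[:, 1:n_coords + 1]

def diffusion_distance(coords, i, j):
    # Euclidean distance in diffusion coordinates approximates the
    # diffusion distance between samples i and j
    return np.linalg.norm(coords[i] - coords[j])

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))          # toy data, purely illustrative
coords = diffusion_map(X, eps=2.0, t=2)
d = diffusion_distance(coords, 0, 1)
```

The key property is that the ordinary Euclidean distance between embedded points approximates the diffusion distance on the original data, which is what makes the coordinates useful for clustering.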
“…For very small values of γ 2 we have P ∼ I, so the probabilities of transition between distinct data points are close to zero and there is no diffusion within one time step. The parameter γ 3 ∈ (0, ∞) is the number of time steps the random walk is run, or propagated, capturing information about higher-order neighbourhood structure (Lafon and Lee [16]). Small values of γ 3 correspond to a few time steps, whereas large values of γ 3 correspond to many.…”
Section: Geometric and Graph Interpretation of Diffuzzy
mentioning confidence: 99%
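The two parameters described in this excerpt are easy to observe numerically. In the sketch below (an assumption-laden illustration, not the cited method's code), γ 2 plays the role of the Gaussian kernel width `eps` and γ 3 the number of steps `t`:

```python
import numpy as np

def transition_matrix(X, eps):
    """Row-stochastic Markov matrix P built from a Gaussian kernel of width eps."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / eps)
    return K / K.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 2))          # toy data, purely illustrative

# Tiny kernel width: P ~ I, so essentially no probability mass moves
# between distinct points in a single time step -- no diffusion.
P_small = transition_matrix(X, eps=1e-6)

# Moderate width: running the walk for t steps, i.e. P**t, spreads mass
# to higher-order neighbourhoods; larger t mixes the chain further.
P = transition_matrix(X, eps=1.0)
P_t = np.linalg.matrix_power(P, 5)
```

Each row of `P` and `P_t` still sums to one (they remain Markov matrices); only how far the mass has spread changes with `eps` and the step count.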
“…Moreover, recent data-set analysis and machine learning methods have been developed that are based on graph Laplacian diffusion processes; they have been used for data set classification [5,6], dimensionality reduction [7], and interactive image segmentation based on label diffusion [8,9]. If we consider an image as a set of pixels, graph Laplacian classification is difficult to apply because of the sheer mass of data to analyse.…”
Section: Introduction
mentioning confidence: 99%