Unsupervised Learning Algorithms 2016
DOI: 10.1007/978-3-319-24211-8_6

Kernel Spectral Clustering and Applications

Cited by 25 publications (19 citation statements)
References 51 publications (34 reference statements)
“…By allowing the cluster indicator matrices (H, Z) to be continuous-valued, the problem is solved by eigenvalue decomposition of the graph Laplacian matrix given in Eqs. (2) and (3) [11,12,21].…”
Section: Graph Cuts
confidence: 99%
“…Nevertheless, it is challenging to select an appropriate scaling factor σ [19]. Kernel spectral clustering (KSC) [20] and its variants [21] have also been proposed.…”
Section: Introduction
confidence: 99%
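To see why the scaling factor σ mentioned in this excerpt is delicate, consider the standard Gaussian affinity exp(-||x_i − x_j||² / (2σ²)): the same pair of points can look strongly connected or essentially disconnected depending on σ. A minimal sketch (the points and σ values below are illustrative assumptions, not from the cited work):

```python
import math

def gaussian_affinity(xi, xj, sigma):
    """Gaussian (RBF) similarity between two points with scaling factor sigma."""
    d2 = sum((a - b) ** 2 for a, b in zip(xi, xj))
    return math.exp(-d2 / (2 * sigma ** 2))

# Two points at squared distance 9; the affinity swings over seven orders
# of magnitude as sigma varies.
xi, xj = (0.0, 0.0), (3.0, 0.0)
for sigma in (0.5, 1.0, 3.0):
    print(sigma, gaussian_affinity(xi, xj, sigma))
# sigma=0.5 -> exp(-18)  ~ 1.5e-8  (points look disconnected)
# sigma=1.0 -> exp(-4.5) ~ 0.011
# sigma=3.0 -> exp(-0.5) ~ 0.607   (points look strongly connected)
```

This sensitivity is what makes the choice of σ challenging in practice: a value that separates one pair of clusters well may merge or shatter another.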
“…Clustering is mainly performed using a (weighted or unweighted) graph describing the similarities between these objects. When the graph nodes are themselves the objects of interest, the problem is known as community detection on graphs [2]; otherwise the construction of the graph adjacency matrix K is based on a kernel operator f, and the similarity between items x_i and x_j is given by K_ij = f(x_i, x_j), often taken in the form K_ij = f(||x_i − x_j||^2) or K_ij = f(x_i^T x_j) for some function f [3]. One of the prominent methods for clustering from K, known as spectral procedures [4], consists in performing a Principal Component Analysis (PCA) on the dominant eigenvectors (presumably containing all the useful information about the data) of the symmetric normalized Laplacian matrix L = D^(−1/2) K D^(−1/2) (with D the degree matrix).…”
Section: Introduction
confidence: 99%
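The pipeline this excerpt describes — build a kernel matrix K, form the symmetric normalized matrix D^(−1/2) K D^(−1/2), and cluster in the space of its dominant eigenvectors — can be sketched with numpy alone. The dataset, the Gaussian kernel with a fixed σ, and the farthest-point k-means initialization below are illustrative assumptions, not the specific method of the cited work:

```python
import numpy as np

def spectral_clusters(X, n_clusters=2, sigma=1.0, n_iter=50):
    """Cluster rows of X via the dominant eigenvectors of D^{-1/2} K D^{-1/2}."""
    # Gaussian kernel matrix K_ij = exp(-||x_i - x_j||^2 / (2 sigma^2)).
    sq = np.sum(X ** 2, axis=1)
    d2 = np.maximum(sq[:, None] + sq[None, :] - 2 * X @ X.T, 0.0)
    K = np.exp(-d2 / (2 * sigma ** 2))
    # Symmetric normalization with the degree matrix D.
    d_inv_sqrt = 1.0 / np.sqrt(K.sum(axis=1))
    L = d_inv_sqrt[:, None] * K * d_inv_sqrt[None, :]
    # Dominant eigenvectors (eigh returns eigenvalues in ascending order).
    _, vecs = np.linalg.eigh(L)
    U = vecs[:, -n_clusters:]
    U = U / (np.linalg.norm(U, axis=1, keepdims=True) + 1e-12)  # row-normalize
    # Lloyd's k-means on the spectral embedding, farthest-point initialization.
    idx = [0]
    for _ in range(1, n_clusters):
        d = ((U[:, None, :] - U[idx][None, :, :]) ** 2).sum(-1).min(axis=1)
        idx.append(int(np.argmax(d)))
    centers = U[idx].copy()
    for _ in range(n_iter):
        labels = ((U[:, None, :] - centers[None, :, :]) ** 2).sum(-1).argmin(axis=1)
        for k in range(n_clusters):
            if np.any(labels == k):
                centers[k] = U[labels == k].mean(axis=0)
    return labels

# Two well-separated Gaussian blobs should end up in different clusters.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, (20, 2)), rng.normal(5.0, 0.3, (20, 2))])
labels = spectral_clusters(X, n_clusters=2)
```

For well-separated groups the kernel matrix is nearly block-diagonal, so the top eigenvectors act as (relaxed) cluster indicators — the same relaxation idea as in the continuous-valued indicator matrices of the first excerpt.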