2015
DOI: 10.1609/aaai.v29i1.9556

Spectral Clustering Using Multilinear SVD: Analysis, Approximations and Applications

Abstract: Spectral clustering, a graph partitioning technique, has gained immense popularity in machine learning in the context of unsupervised learning. This is due to convincing empirical studies, the elegant approaches involved, and the theoretical guarantees provided in the literature. Recently, to tackle challenging problems arising in areas such as computer vision, a need surfaced to develop spectral methods that incorporate multi-way similarity measures. This, in turn, leads to a hypergraph partitioning problem. In thi…


Cited by 16 publications (14 citation statements)
References 25 publications
“…Cutting one hyperedge will produce three singletons which we consider as three partitions. A similar definition of normalized associativity can be seen in literature [20,40].…”
Section: Partition Cost Is Given By
confidence: 81%
“…In this section, we compare HLloyd + HSC with two other classic tensor clustering algorithms: HOSVD + Kmeans: apply high-order SVD (HOSVD) on 𝒴 (De Lathauwer et al, 2000a; Ghoshdastidar & Dukkipati, 2015), then perform k-means on the outcome factors of HOSVD; CP + Kmeans: apply CANDECOMP/PARAFAC (CP) decomposition on 𝒴 (Carroll & Chang, 1970), then perform k-means on the outcome factors of CP decomposition. …”
Section: Numerical Studies
confidence: 99%
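The "HOSVD + Kmeans" baseline quoted above can be sketched as follows: take the leading left singular vectors of each mode-n unfolding of a 3-way affinity tensor, then run k-means on the rows of the mode-1 factor. This is a minimal illustrative sketch, not the cited authors' code; the tensor shape, rank choice, and the synthetic two-block tensor are assumptions made here for demonstration.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def unfold(T, mode):
    """Mode-n unfolding: bring `mode` to the front, flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd_factors(T, ranks):
    """Leading left singular vectors of each mode-n unfolding (truncated HOSVD)."""
    factors = []
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        factors.append(U[:, :r])
    return factors

# Synthetic 3-way "affinity" tensor with two planted blocks of nodes
# (an assumed toy input, standing in for a real multi-way similarity tensor).
rng = np.random.default_rng(0)
n, k = 10, 2
labels_true = np.repeat([0, 1], n // 2)
T = 0.05 * rng.random((n, n, n))
for c in range(k):
    idx = np.where(labels_true == c)[0]
    T[np.ix_(idx, idx, idx)] += 1.0   # strong within-cluster affinity

U1 = hosvd_factors(T, ranks=(k, k, k))[0]       # mode-1 factor, shape (n, k)
_, labels = kmeans2(U1, k, seed=0, minit="++")  # k-means on its rows
```

With the strong planted block structure, rows of the mode-1 factor belonging to the same block are nearly identical, so k-means recovers the two groups.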
“…SC relies on an affinity matrix to characterize pairwise similarities, which falls short of providing satisfactory clustering performance for such complex data [8], [9]. Converging evidence [10], [11] suggests that dealing with high-dimensional and noisy data requires characterizing more complex similarities, and tensor spectral clustering (TSC) [12] has therefore been developed recently as a promising solution. TSC employs high-order affinity tensors to characterize multi-wise similarities among samples instead of merely pairwise similarities as in previous methods.…”
Section: Introduction
confidence: 99%