2013
DOI: 10.3758/s13428-013-0329-y
Subspace K-means clustering

Abstract: To achieve an insightful clustering of multivariate data, we propose subspace K-means. Its central idea is to model the centroids and cluster residuals in reduced spaces, which allows for dealing with a wide range of cluster types and yields rich interpretations of the clusters. We review the existing related clustering methods, including deterministic, stochastic, and unsupervised learning approaches. To evaluate subspace K-means, we performed a comparative simulation study, in which we manipulated the overla…
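The idea sketched in the abstract, clustering with centroids confined to a reduced space, can be illustrated with a toy alternation between a K-means assignment step and a low-rank (PCA) re-projection of the centroids. This is a rough reduced-K-means-style sketch, not the authors' subspace K-means algorithm (which also models the cluster residuals in reduced spaces); the function name fit_reduced_kmeans and all parameter defaults are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

def fit_reduced_kmeans(X, n_clusters=3, n_components=2, n_iter=50, seed=0):
    """Toy illustration of clustering with centroids restricted to a
    low-dimensional subspace; not the full subspace K-means model of
    Timmerman et al. (2013)."""
    rng = np.random.default_rng(seed)
    # initialise the centroids with randomly chosen observations
    centroids = X[rng.choice(len(X), size=n_clusters, replace=False)]
    for _ in range(n_iter):
        # assignment step: nearest centroid in the full variable space
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # update step: cluster means ...
        means = [X[labels == k].mean(axis=0) if np.any(labels == k) else centroids[k]
                 for k in range(n_clusters)]
        centroids = np.vstack(means)
        # ... projected onto an n_components-dimensional subspace
        pca = PCA(n_components=n_components)
        centroids = pca.inverse_transform(pca.fit_transform(centroids))
    return labels, centroids

# usage on simulated data with six variables
X = np.random.default_rng(1).normal(size=(200, 6))
labels, centroids = fit_reduced_kmeans(X)
```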

Cited by 57 publications (47 citation statements)
References 52 publications
“…when performance was stable) and used a k-means clustering separation with Statistica® software (version 12) (Timmerman et al 2013), so that each animal belonged to the set whose mean was closest to its own performance value. Three groups were defined: animals which chose mostly advantageous options at the end of the experiment, hereafter called the “safe” group; animals which explored the different options at the end of the experiment, hereafter called “risky”; and animals which exhibited an intermediate behavior, distributing their choices between sporadic risky choices and a high proportion of advantageous choices, hereafter called “average”.…”
Section: Methods (mentioning)
confidence: 99%
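For readers who want to reproduce this kind of grouping outside Statistica, a minimal Python analogue with scikit-learn might look as follows; the array performance and its values are hypothetical stand-ins for the per-animal scores described in the excerpt, not data from the cited study.

```python
import numpy as np
from sklearn.cluster import KMeans

# hypothetical per-animal performance values (e.g., proportion of
# advantageous choices once performance is stable); the original
# analysis was run in Statistica, not scikit-learn
performance = np.array([0.91, 0.88, 0.85, 0.52, 0.47, 0.49, 0.73, 0.70])

# three clusters, matching the "safe" / "average" / "risky" split
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
labels = kmeans.fit_predict(performance.reshape(-1, 1))

# each animal is assigned to the cluster whose mean is closest to
# its own performance value
print("cluster labels:", labels)
print("cluster means:", kmeans.cluster_centers_.ravel())
```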
“…Many methods have been proposed to improve the original reduced K-means (De Soete and Carroll 1994), including factorial K-means (Vichi and Kiers 2001) and subspace K-means (Timmerman et al 2013). An alternative solution is using variable selection (Steinley and Brusco 2008) or variable weighting (Huang et al 2005).…”
Section: Clustering (mentioning)
confidence: 99%
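As a companion to the methods listed in the excerpt above, the sketch below illustrates the variable-weighting idea in the spirit of Huang et al. (2005): alternate ordinary K-means on weighted variables with a re-estimation of the per-variable weights from their within-cluster dispersion. The function name weighted_kmeans_sketch and the parameter choices are assumptions for illustration, not a faithful reimplementation of W-k-means.

```python
import numpy as np
from sklearn.cluster import KMeans

def weighted_kmeans_sketch(X, n_clusters=3, beta=2.0, n_iter=10, seed=0):
    """Alternate K-means on weighted variables with an update of the
    variable weights; variables with large within-cluster dispersion
    are down-weighted.  A rough sketch, not W-k-means itself."""
    n_features = X.shape[1]
    weights = np.full(n_features, 1.0 / n_features)
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        # cluster on the weighted variables (weights enter the squared
        # Euclidean distance as weights ** beta)
        km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed)
        labels = km.fit_predict(X * weights ** (beta / 2.0))
        # per-variable within-cluster dispersion in the original space
        dispersion = np.zeros(n_features)
        for k in range(n_clusters):
            members = X[labels == k]
            if len(members):
                dispersion += ((members - members.mean(axis=0)) ** 2).sum(axis=0)
        # weight update: w_j proportional to dispersion_j ** (-1 / (beta - 1))
        w = (dispersion + 1e-12) ** (-1.0 / (beta - 1.0))
        weights = w / w.sum()
    return labels, weights
```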
“…As examples, one may think of the loading matrices that result from fitting mixtures of factor analyzers (McLachlan & Peel, 2000; Yung, 1997), a subspace k-means analysis (Timmerman, Ceulemans, De Roover & Van Leeuwen, 2013), or a switching principal component analysis (De Roover, Timmerman, Van Diest, Onghena & Ceulemans, 2014b).…”
Section: Discussion (mentioning)
confidence: 99%