2014
DOI: 10.1007/978-3-662-44845-8_20
Unsupervised Feature Selection via Unified Trace Ratio Formulation and K-means Clustering (TRACK)

Abstract: Feature selection plays a crucial role in scientific research and practical applications. In real-world applications, labeling data is time- and labor-consuming, so unsupervised feature selection methods are desirable for many practical applications. Linear discriminant analysis (LDA) with the trace ratio criterion is a supervised dimensionality reduction method that has shown good performance in improving classification. In this paper, we first propose a unified objective to seamlessly accommodate trace ratio …
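As a point of reference for the abstract, the trace ratio criterion of LDA seeks an orthonormal projection W that maximizes the ratio of projected between-class to within-class scatter. The LaTeX sketch below uses standard notation (scatter matrices S_b and S_w, cluster indicator G) and only illustrates the kind of objective involved; it is not the paper's exact formulation.

    % Supervised trace ratio criterion of LDA
    \max_{W^\top W = I} \; \frac{\operatorname{tr}\left(W^\top S_b W\right)}{\operatorname{tr}\left(W^\top S_w W\right)}

    % Unsupervised variant sketched here: the scatter matrices depend on an
    % unknown cluster indicator G, which is optimized jointly (K-means style)
    \max_{W^\top W = I,\; G} \; \frac{\operatorname{tr}\left(W^\top S_b(G)\, W\right)}{\operatorname{tr}\left(W^\top S_w(G)\, W\right)}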


Citations: Cited by 74 publications (22 citation statements).
References: 25 publications.
“…Ye et al. introduced a kernelized K-means algorithm, denoted DisKmeans, where embedding into a lower-dimensional subspace via linear discriminant analysis (LDA) is jointly learned with the K-means cluster assignments [62]. [49] proposed a new method to simultaneously conduct clustering and feature embedding/selection to achieve better performance. But these models suffer from shallow, linear embedding functions, which cannot represent the non-linearity of real-world data.…”
Section: Related Work (mentioning)
confidence: 99%
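The excerpt above describes jointly learning a discriminative subspace and K-means cluster assignments. A minimal Python sketch of that general idea, alternating an LDA embedding fit on the current labels with K-means in the embedded space, is given below. It is an illustration only, not the kernelized DisKmeans algorithm of [62] nor the method of [49], and the function name is hypothetical.

    # Illustrative only: alternate a discriminative embedding with K-means.
    # Not the exact DisKmeans algorithm; names here are hypothetical.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    def alternating_lda_kmeans(X, n_clusters, n_iters=10, seed=0):
        # Initialize labels with plain K-means in the original feature space.
        labels = KMeans(n_clusters=n_clusters, n_init=10,
                        random_state=seed).fit_predict(X)
        for _ in range(n_iters):
            # (1) Learn a linear embedding from the current pseudo-labels.
            lda = LinearDiscriminantAnalysis(n_components=n_clusters - 1)
            Z = lda.fit_transform(X, labels)
            # (2) Re-cluster in the embedded space.
            new_labels = KMeans(n_clusters=n_clusters, n_init=10,
                                random_state=seed).fit_predict(Z)
            if np.array_equal(new_labels, labels):
                break  # assignments stabilized
            labels = new_labels
        return labels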
“…To preserve the original features in the low-dimensional feature space, Wang et al. [30] proposed a discriminative learning method with Trace Ratio Formulation and K-means Clustering (TRACK), which unifies trace ratio LDA, K-means, and regularized feature learning into a single objective function. However, similar to LDAKM, TRACK still suffers from the “small-sample-size” problem and high computational complexity.…”
Section: K-means With Discriminative Subspace Learning (mentioning)
confidence: 99%
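This excerpt describes a single objective combining trace ratio LDA, K-means, and regularized feature learning. The hedged LaTeX sketch below shows one common way such an objective is written; the row-sparsity (l2,1) regularizer and its weight gamma are standard choices assumed here for illustration and may differ from TRACK's exact formulation.

    \max_{W^\top W = I,\; G \in \text{Ind}}\;
      \frac{\operatorname{tr}\left(W^\top S_b(G)\, W\right)}{\operatorname{tr}\left(W^\top S_w(G)\, W\right)}
      \;-\; \gamma \,\lVert W \rVert_{2,1}

    % G ranges over K-means cluster indicator matrices; the l2,1 norm of W
    % drives whole rows (i.e., features) to zero, which performs the selection.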
“…Besides, it imposes a balance parameter to control the contributions of the within-class scatter matrix and the between-class scatter matrix [24]. Following the previous works [24], [30], to achieve a fair comparison, all the parameters (if any) of the compared algorithms are tuned by a “grid-search” strategy over a candidate set, and the best clustering results are recorded with the optimal parameters. We repeat each experiment 50 times independently and report the average results together with the variance.…”
Section: Experiments: Experiment Setup (mentioning)
confidence: 99%
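The tuning protocol in this excerpt can be sketched in a few lines of Python. The run_clustering(X, gamma, seed) callable below is a hypothetical stand-in for whichever compared algorithm is being tuned, and the default grid of powers of ten is only an example, since the excerpt's actual candidate set is not shown.

    # Illustrative only: grid-search a balance parameter, run the clustering
    # several times per setting, and keep the setting with the best average.
    import numpy as np

    def grid_search_balance(X, y_true, run_clustering, score_fn,
                            grid=(1e-3, 1e-2, 1e-1, 1.0, 1e1, 1e2, 1e3),
                            n_repeats=50):
        best = None
        for gamma in grid:
            scores = [score_fn(y_true, run_clustering(X, gamma, seed=s))
                      for s in range(n_repeats)]
            mean, var = float(np.mean(scores)), float(np.var(scores))
            if best is None or mean > best["mean"]:
                best = {"gamma": gamma, "mean": mean, "var": var}
        return best

For example, score_fn could be sklearn.metrics.normalized_mutual_info_score when ground-truth labels are available for evaluation.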
“…There are several ways to handle an unlabeled dataset; the most common approach is to make LDA unsupervised by utilizing clustering algorithms [9], [14]-[16]. Accordingly, a Self-Organizing Map (SOM) was employed to create the clusters (i.e., the class labels), because it is an unsupervised learning technique that discovers patterns in the dataset and can handle high-dimensional data [17].…”
mentioning
confidence: 99%
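A minimal sketch of the general recipe in this last excerpt, generating pseudo-labels with an unsupervised clustering step and then fitting LDA on them, is shown below. K-means is substituted for the clustering step purely to keep the example within scikit-learn; the cited work uses a Self-Organizing Map instead, and the function name is hypothetical.

    # Illustrative only: make LDA usable without ground-truth labels by
    # clustering first. K-means stands in for the SOM used in the cited work.
    from sklearn.cluster import KMeans
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    def unsupervised_lda_embedding(X, n_clusters, n_components):
        # n_components must not exceed n_clusters - 1 for LDA.
        pseudo_labels = KMeans(n_clusters=n_clusters, n_init=10,
                               random_state=0).fit_predict(X)
        lda = LinearDiscriminantAnalysis(n_components=n_components)
        Z = lda.fit_transform(X, pseudo_labels)  # low-dimensional embedding
        return Z, pseudo_labels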