Unsupervised feature selection by regularized self-representation
Pattern Recognition, 2015. DOI: 10.1016/j.patcog.2014.08.006

Cited by 272 publications (81 citation statements)
References 14 publications
“…Based on dataset labeling, FS methods can be classified as supervised [1], unsupervised [2], or semi-supervised. Supervised FS methods are further categorized into filter, wrapper, and embedded methods.…”
Section: Related Work
confidence: 99%
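To make the supervised/unsupervised distinction in that statement concrete, here is a minimal illustrative sketch in numpy (not drawn from any of the cited works; the variance and Fisher-like criteria and the function names are illustrative assumptions): a variance score is an unsupervised filter because it needs no labels, while the Fisher-like score is a supervised filter because it uses class labels.

```python
import numpy as np

def unsupervised_filter_scores(X):
    """Unsupervised filter criterion: score each feature by its variance (no labels needed)."""
    return X.var(axis=0)

def supervised_filter_scores(X, y):
    """Supervised filter criterion: a simple Fisher-like score comparing
    between-class separation to within-class spread for each feature."""
    classes = np.unique(y)
    overall_mean = X.mean(axis=0)
    between = np.zeros(X.shape[1])
    within = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        between += len(Xc) * (Xc.mean(axis=0) - overall_mean) ** 2
        within += ((Xc - Xc.mean(axis=0)) ** 2).sum(axis=0)
    return between / (within + 1e-12)

# Example: keep the top-k features under either criterion.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
y = rng.integers(0, 2, size=100)
k = 5
top_unsupervised = np.argsort(unsupervised_filter_scores(X))[::-1][:k]
top_supervised = np.argsort(supervised_filter_scores(X, y))[::-1][:k]
```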
“…Ant colony optimization (ACO [36]), with some modifications termed ABACO, is used for feature selection [10]. Graph regularized Non-negative Matrix Factorization (GNMF) is developed for feature selection [7]. An objective function is defined which finds a subspace such that all samples of the subspace are very far from all other samples.…”
Section: Literature Survey
confidence: 99%
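For context, the paper under review names its method regularized self-representation. Below is a minimal numpy sketch of a self-representation-style selector, assuming the commonly used ℓ2,1-regularized form min_W ||X − XW||_{2,1} + λ||W||_{2,1}, solved by iteratively reweighted least squares and ranking features by the row norms of W. This is a sketch under those assumptions, not the authors' reference implementation.

```python
import numpy as np

def rsr_feature_scores(X, lam=1.0, n_iter=30, eps=1e-8):
    """Self-representation-style feature scoring (sketch).

    Approximately minimizes ||X - X W||_{2,1} + lam * ||W||_{2,1}
    by iteratively reweighted least squares, then scores each feature
    by the l2 norm of its row of W (a large norm means the feature
    helps reconstruct the other features).
    """
    n, d = X.shape
    W = np.eye(d)
    for _ in range(n_iter):
        E = X - X @ W                                        # per-sample residual rows
        gl = 1.0 / (2.0 * np.linalg.norm(E, axis=1) + eps)   # sample reweighting
        gr = 1.0 / (2.0 * np.linalg.norm(W, axis=1) + eps)   # row-of-W reweighting
        XtGlX = X.T @ (gl[:, None] * X)
        # Closed-form weighted update: (X^T G_L X + lam * G_R) W = X^T G_L X
        W = np.linalg.solve(XtGlX + lam * np.diag(gr), XtGlX)
    return np.linalg.norm(W, axis=1)

# Example: rank features and keep the top k.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 15))
scores = rsr_feature_scores(X, lam=0.5)
top_k = np.argsort(scores)[::-1][:5]
```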
“…Eliminating redundant, noisy and irrelevant features from datasets is defined as dimensionality reduction. Dimensionality reduction is used on face image datasets [6,7,8,9], microarray datasets and speech signals [6,7], digit images [6,7,8], and letter images [8,10] for classification or clustering. Feature selection refers to selecting a subset of features from the complete set of features in a dataset.…”
Section: Introduction
confidence: 99%
“…Supervised and semi-supervised methods are usually applied to labeled data, while unsupervised methods are more appropriate for unlabeled data [3]. However, many real-world applications do not contain any labels; hence, unsupervised feature selection becomes difficult to achieve [4].…”
Section: Introduction
confidence: 99%