2014
DOI: 10.1016/j.knosys.2013.11.004

Supervised feature subset selection with ordinal optimization

Cited by 11 publications (4 citation statements). References 18 publications.

“…The goal of feature selection [7][8][9][10][11] is to select the smallest subset of original features which maintains some meaningful characteristics with respect to a chosen criterion. According to the possible use of output information (e.g.…”
Section: Introduction (mentioning; confidence: 99%)
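
The quoted definition lends itself to a small illustration. Below is a minimal sketch of supervised feature subset selection by greedy forward search under a chosen criterion. It is not the ordinal-optimization algorithm of the cited paper; the criterion (nearest-centroid training accuracy), the function names, and the toy data are all illustrative assumptions.

    # Hypothetical sketch: greedy forward selection of a feature subset
    # under a supervised criterion. Not the paper's ordinal-optimization
    # method; it only illustrates the generic problem setup.
    import numpy as np

    def criterion(X, y):
        """Toy supervised criterion: training accuracy of a
        nearest-centroid classifier on the candidate subset."""
        classes = np.unique(y)
        centroids = np.stack([X[y == c].mean(axis=0) for c in classes])
        d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        pred = classes[d.argmin(axis=1)]
        return (pred == y).mean()

    def forward_select(X, y, k):
        """Greedily add the feature that most improves the criterion."""
        selected, remaining = [], list(range(X.shape[1]))
        while len(selected) < k:
            scores = [(criterion(X[:, selected + [j]], y), j)
                      for j in remaining]
            best_score, best_j = max(scores)
            selected.append(best_j)
            remaining.remove(best_j)
        return selected

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 8))
    y = (X[:, 2] + 0.5 * X[:, 5] > 0).astype(int)  # only 2 and 5 matter
    print(forward_select(X, y, 2))                  # likely [2, 5]

Evaluating every candidate subset with a learner is what makes this kind of search expensive, which is the cost that cheaper search strategies aim to reduce.
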
“…Among existing feature selection algorithms, supervised feature selection algorithms are commonly employed to process data with class labels. Representatives include the feature selection algorithm based on mRMR [32], feature selection with sparsity-inducing norms [14], feature selection algorithms based on the t-test [44,45], the feature subset selection algorithm with ordinal optimization [5], and the feature selection algorithm based on neighborhood multi-granulation fusion [25]. For the investigation of feature selection, one of the critical issues is how to select a feature subset, and filters, wrappers, and embedded methods have been generally recognized as the most popular approaches to this issue [2,8].…”
Section: Introduction (mentioning; confidence: 99%)
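
The passage distinguishes filters, wrappers, and embedded methods; the sketch above is a wrapper. For contrast, here is a minimal filter-style sketch that ranks features by a Welch t-statistic, in the spirit of the t-test based methods cited as [44,45]. It assumes binary labels in {0, 1}; the helper name and toy data are hypothetical.

    # Hypothetical filter-style selector: score each feature by a
    # two-sample Welch t-statistic between the classes, then rank.
    import numpy as np

    def t_test_scores(X, y):
        X0, X1 = X[y == 0], X[y == 1]
        m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
        v0, v1 = X0.var(axis=0, ddof=1), X1.var(axis=0, ddof=1)
        se = np.sqrt(v0 / len(X0) + v1 / len(X1))  # Welch standard error
        return np.abs(m0 - m1) / se                 # larger = more relevant

    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 6))
    y = rng.integers(0, 2, size=200)
    X[:, 3] += 2.0 * y                              # inject class signal
    ranking = np.argsort(t_test_scores(X, y))[::-1]
    print(ranking[:2])                              # feature 3 ranks first

Unlike a wrapper, this scores each feature independently of any learner, which is cheap but ignores feature interactions.
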
“…Related studies of feature selection and data reduction have shown promising results: prediction models built with one of these data preprocessing steps perform better than those built without data preprocessing (Feng et al., 2014; Gunal and Edizkan, 2008; Leyva et al., 2014; Orsenigo and Vercellis, 2013; Piramuthu, 2004; Tsai, 2009; Tsai and Chang, 2013; Wang and Chiang, 2008). However, they focus only on either selecting more representative features or reducing faulty data for better classification or prediction.…”
Section: Introduction (mentioning; confidence: 99%)