2016 IEEE 16th International Conference on Data Mining (ICDM) 2016
DOI: 10.1109/icdm.2016.0050
A Fast Iterative Algorithm for Improved Unsupervised Feature Selection

Cited by 12 publications (12 citation statements)
References 29 publications
“…Additionally, its performance was quite consistent over different disease types in achieving a high ROC-AUC score with a low FN rate, as shown in Figure 2. We also benchmarked four other methods: twoPhase [3], iterFS [4], Barabasi [9], and LIMMA. We found that the performance of these algorithms was inconsistent, and that the performance degraded—especially when a small number of features were selected (Figure 2).…”
Section: Results (mentioning)
confidence: 99%
“…We also benchmarked our results with two other feature selection methods, twoPhase [3] and iterFS [4]; one geneset selection method, Barabasi [9]; and one DE analysis method, LIMMA. For twoPhase, iterFS, and Barabasi we follow the same preprocessing as our method to select the subset of features.…”
Section: Methods (mentioning)
confidence: 99%
“…To investigate its theoretical performance, Bhaskara et al. (2016) studied the equivalent problem that maximizes ‖SS⁺A‖²_F, and proved an approximation ratio of (1 − ε) w.r.t. the optimal function value. Ordozgoiti et al. (2016) proposed a simple local search algorithm, which starts from k randomly selected columns, and iteratively replaces one selected column with the best among the n − k unselected columns, until no improvement can be yielded. This algorithm has been shown empirically to outperform other state-of-the-art methods, and an approximation bound w.r.t.…”
Section: Related Work (mentioning)
confidence: 99%
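The local search scheme described in the quoted passage can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the convergence tolerance, and the tiny test matrix are all assumptions introduced here. The objective ‖SS⁺A‖²_F (the squared Frobenius norm of A projected onto the span of the selected columns) is evaluated directly via least squares.

```python
import numpy as np

def css_objective(A, idx):
    """||S S^+ A||_F^2 for the columns of A indexed by idx:
    project A onto the span of the selected columns and
    return the squared Frobenius norm of the projection."""
    S = A[:, idx]
    coeffs = np.linalg.lstsq(S, A, rcond=None)[0]  # S^+ A
    return float(np.sum((S @ coeffs) ** 2))

def local_search_css(A, k, init=None, seed=0):
    """Local-search column subset selection (sketch of the scheme
    described above): start from k columns, repeatedly apply a
    single-column swap that increases the objective, and stop when
    no swap yields an improvement."""
    n = A.shape[1]
    rng = np.random.default_rng(seed)
    selected = (list(init) if init is not None
                else list(rng.choice(n, size=k, replace=False)))
    best = css_objective(A, selected)
    improved = True
    while improved:
        improved = False
        for pos in range(k):
            for j in range(n):
                if j in selected:
                    continue
                cand = selected.copy()
                cand[pos] = j  # try replacing one selected column
                val = css_objective(A, cand)
                if val > best + 1e-12:
                    selected, best, improved = cand, val, True
    return selected, best
```

On a small matrix where one column clearly dominates (e.g. a column of norm 10 alongside near-orthogonal unit columns), the search converges to that column regardless of the starting selection; the loop terminates because the objective strictly increases with each accepted swap and there are finitely many subsets.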
“…On the one hand, not much is known about the hardness of approximation of column subset selection problems in general, as tight examples exist only for best rank-k subspaces. On the other hand, traditional combinatorial algorithms are generally much more convenient for practical implementation and exhibit good empirical performance [28,27,62]. Furthermore, recent works provide approximation guarantees for methods in this category [1,16,63].…”
Section: Heuristics (mentioning)
confidence: 99%