2017
DOI: 10.1038/s41598-017-13259-6
RIFS: a randomly restarted incremental feature selection algorithm

Abstract: The advent of the big data era has imposed both running-time and learning-efficiency challenges on machine learning researchers. Biomedical OMIC research is one of these big data areas and has changed biomedical research drastically. However, the high cost of data production and the difficulty of participant recruitment introduce the paradigm of “large p, small n” into biomedical research. Feature selection is usually employed to reduce the high number of biomedical features, so that a stable data-independent c…

Cited by 30 publications (19 citation statements)
References 48 publications
“…Three classification performance measurements, i.e., accuracy (Acc), sensitivity (Sn), and specificity (Sp), were used to evaluate how well a feature subset performed (Ye et al, 2017;Xu et al, 2018;Yokoi et al, 2018;Zhao et al, 2018). The RA children were regarded as the positive samples (P) while the matched controls were the negative samples (N).…”
Section: Performance Measurements
confidence: 99%
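The three measurements quoted above can be sketched in plain Python from the positive/negative labeling the statement describes (a minimal illustration, not the cited papers' code; the function name and the 1/0 label convention are assumptions):

```python
def classification_metrics(y_true, y_pred):
    """Compute accuracy (Acc), sensitivity (Sn), and specificity (Sp).

    Label 1 marks the positive samples (P, e.g. the RA children in the
    quoted statement) and label 0 the negative samples (N, the controls).
    """
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    acc = (tp + tn) / len(y_true)          # overall fraction correct
    sn = tp / (tp + fn) if (tp + fn) else 0.0  # true positive rate
    sp = tn / (tn + fp) if (tn + fp) else 0.0  # true negative rate
    return acc, sn, sp
```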
“…These measurements were used in various prediction models, such as for DNA and RNA functional elements (He et al, 2018;Feng et al, 2019). They were calculated using the 10-fold cross-validation (10FCV) strategy, similar to Ye et al (2017) and Zhao et al (2018).…”
Section: Performance Measurements
confidence: 99%
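The 10FCV strategy mentioned in the quote partitions the samples into ten folds, each serving once as the held-out test set while the rest train the model. A minimal index-splitting sketch (the function name is an assumption; real studies typically shuffle and stratify the samples first):

```python
def k_fold_indices(n_samples, k=10):
    """Yield (train_idx, test_idx) pairs for k-fold cross-validation.

    Folds differ in size by at most one sample when k does not divide
    n_samples evenly.
    """
    indices = list(range(n_samples))
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0)
                  for i in range(k)]
    start = 0
    for size in fold_sizes:
        test_idx = indices[start:start + size]
        train_idx = indices[:start] + indices[start + size:]
        yield train_idx, test_idx
        start += size
```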
“…Recursive feature elimination (RFE) was a widely used feature selection framework [45] that may use different classifiers as the feature evaluator, e.g., support vector machine (SVM-RFE) [46], decision tree (DT-RFE), and logistic regression (LR-RFE). Other investigated wrappers were extremely randomized tree (ET) [47], gradient boosted decision tree (GBDT), xgboost (XGB), random forest (RF) [48], McTwo [27] and RIFS [26].…”
Section: F Two Groups Of Feature Selection Algorithms
confidence: 99%
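The RFE framework this statement describes drops the least-informative feature each round, using the wrapped classifier's feature weights as the evaluator. A minimal sketch with a pluggable `weight_fn` standing in for the fitted classifier (SVM, decision tree, or logistic regression in the quoted variants); all names here are illustrative, not from the cited work:

```python
def recursive_feature_elimination(features, weight_fn, n_select):
    """Minimal RFE sketch: repeatedly drop the feature with the smallest
    absolute weight until n_select features remain.

    weight_fn maps a list of feature names to {name: weight}; in practice
    the weights would come from refitting a classifier on the surviving
    features at every round (as in SVM-RFE / DT-RFE / LR-RFE).
    """
    selected = list(features)
    while len(selected) > n_select:
        weights = weight_fn(selected)          # refit on current subset
        worst = min(selected, key=lambda f: abs(weights[f]))
        selected.remove(worst)                 # eliminate weakest feature
    return selected
```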
“…Two more popular feature selection algorithms were evaluated. The t-test based feature ranking algorithm was widely used to evaluate the phenotype association of each feature, and incremental feature selection (IFS) was usually utilized to find the best number of top-ranked features (Gharbali, et al, 2018;Ye, et al, 2017). The Lasso algorithm evaluated the features by minimizing the L1-penalty and assigned a weight to each feature (Deshpande, et al, 2019;Kumar, et al, 2017).…”
Section: Selecting Features To Improve the Predictions
confidence: 99%
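The t-test ranking plus IFS procedure in the quote works in two stages: rank each feature by the magnitude of its two-sample t statistic across the phenotype classes, then grow the top-k ranked set incrementally and keep the k with the best evaluation score. A minimal sketch (function names and the `evaluate` callback are assumptions; a real pipeline would score each subset with a cross-validated classifier):

```python
import math

def welch_t(pos, neg):
    """Two-sample (Welch) t statistic of one feature across two classes."""
    def mean_var(xs):
        m = sum(xs) / len(xs)
        v = sum((x - m) ** 2 for x in xs) / (len(xs) - 1)  # sample variance
        return m, v
    mp, vp = mean_var(pos)
    mn, vn = mean_var(neg)
    return (mp - mn) / math.sqrt(vp / len(pos) + vn / len(neg))

def incremental_feature_selection(ranked, evaluate):
    """IFS: grow the top-k ranked feature set; return the best-scoring k."""
    best_k, best_score = 1, float("-inf")
    for k in range(1, len(ranked) + 1):
        score = evaluate(ranked[:k])
        if score > best_score:
            best_k, best_score = k, score
    return ranked[:best_k], best_score
```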