2018
DOI: 10.1007/s11634-018-0334-1
A two-stage sparse logistic regression for optimal gene selection in high-dimensional microarray data classification

Cited by 56 publications (26 citation statements). References 50 publications.
“…Filtering methods, which reduce dimensionality while aiming to retain as many of the most promising features as possible, have long been under development. A number of filtering methods have been proposed to rank features, such as information gain [13], Markov blanket [14], Bayesian variable selection [15], Boruta [16], Fisher score [17], Relief [18], maximum relevance and minimum redundancy (MRMR) [19], and the marginal maximum likelihood score (MMLS) [20], among which MMLS is one of the simplest and most computationally efficient feature selection methods.…”
Section: Introduction
confidence: 99%
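As an illustration of the filter-style ranking described above, the Fisher score can be sketched in a few lines of plain Python (a minimal sketch for binary labels; the function name `fisher_scores` and the small tolerance constant are illustrative, not from the cited works):

```python
def fisher_scores(X, y):
    """Rank features by Fisher score: (mu1 - mu0)^2 / (var1 + var0).

    X is a list of samples (each a list of feature values), y a list of
    binary labels (0/1). Returns feature indices, best-ranked first.
    """
    n_features = len(X[0])
    scores = []
    for j in range(n_features):
        col0 = [row[j] for row, label in zip(X, y) if label == 0]
        col1 = [row[j] for row, label in zip(X, y) if label == 1]
        mu0 = sum(col0) / len(col0)
        mu1 = sum(col1) / len(col1)
        var0 = sum((v - mu0) ** 2 for v in col0) / len(col0)
        var1 = sum((v - mu1) ** 2 for v in col1) / len(col1)
        # small constant guards against division by zero for constant features
        scores.append((mu1 - mu0) ** 2 / (var0 + var1 + 1e-12))
    return sorted(range(n_features), key=lambda j: -scores[j])
```

A filter method like this scores each feature independently of the classifier, which is what makes it cheap enough for tens of thousands of genes.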
“…[21] utilized the Relief statistical method to rank features. [20] gave a marginal maximum likelihood estimator as a feature ranking method and improved classification accuracy. [22] also developed a novel method to rank features and then chose the optimal subset of features.…”
Section: Introduction
confidence: 99%
“…This approach has been applied to only two binary‐class data sets, which can be considered a limitation. Algamal and Lee [38] proposed a two‐stage sparse logistic regression for efficient gene selection and cancer classification. Experimental results show that the suggested method significantly outperforms other existing techniques in terms of CA, AUC, and G‐mean.…”
Section: State‐of‐the‐art Technique
confidence: 99%
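The sparse logistic regression idea behind such gene-selection methods can be illustrated with a generic L1-penalized logistic regression fitted by proximal gradient descent (a minimal sketch, not the authors' two-stage algorithm; the function name and the hyperparameter defaults are illustrative):

```python
import math

def l1_logistic(X, y, lam=0.1, lr=0.1, iters=2000):
    """L1-penalized logistic regression via proximal gradient descent.

    The L1 penalty drives many weights exactly to zero, so the surviving
    nonzero weights act as the selected features (genes).
    """
    n, p = len(X), len(X[0])
    w = [0.0] * p
    for _ in range(iters):
        # gradient of the average logistic loss
        grad = [0.0] * p
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi))
            err = 1.0 / (1.0 + math.exp(-z)) - yi
            for j in range(p):
                grad[j] += err * xi[j] / n
        # gradient step followed by soft-thresholding (the L1 proximal map)
        for j in range(p):
            wj = w[j] - lr * grad[j]
            w[j] = math.copysign(max(abs(wj) - lr * lam, 0.0), wj)
    return w
```

On data where only the first feature is informative, the penalty shrinks the noise feature's weight toward zero while the informative weight stays large, which is the sparsity property these gene-selection papers exploit.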
“…The training time and testing time taken by the various algorithms are reported in seconds. It is observed from the table that, for almost all data sets, the execution time of the proposed method is much less than that of the existing methods. [Values spilled from a comparison table: [40] 91.18 90 100; BDE-XRankf [29] 82.4 75 95; 8-S PMSO [33] 98.1 94.2 –; IRLDA [41] 97 – –; GEM [25] 91.5 91.2 –; AEN-CMI [37] 91.05 89.30 –; SLR [38] 95.51 94.61 –; DFS [59] 98. Bold fonts indicate the highest results and the names of the proposed techniques.]…”
Section: Runtime Analysis
confidence: 99%
“…The problem of overdispersion usually occurs in count data. Unlike the Poisson regression model, negative binomial regression can handle the overdispersion issue [5,6,31].…”
Section: Introduction
confidence: 99%
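The overdispersion that motivates the negative binomial model can be checked with a variance-to-mean ratio (a minimal plain-Python sketch; the function name `dispersion_index` is illustrative):

```python
def dispersion_index(counts):
    """Variance-to-mean ratio of count data.

    A Poisson model assumes variance equals the mean (ratio near 1);
    a ratio well above 1 signals overdispersion, which the negative
    binomial model accommodates through an extra dispersion parameter.
    """
    n = len(counts)
    mean = sum(counts) / n
    var = sum((c - mean) ** 2 for c in counts) / (n - 1)  # sample variance
    return var / mean
```

Highly skewed counts such as `[0, 0, 1, 2, 15, 20]` give a ratio far above 1, while roughly Poisson-like counts sit near or below 1.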