2015
DOI: 10.1186/s12859-015-0629-6

Optimal combination of feature selection and classification via local hyperplane based learning strategy

Abstract: Background: Classifying cancers by gene selection is among the most important and challenging procedures in biomedicine. A major challenge is to design an effective method that eliminates irrelevant, redundant, or noisy genes from the classification, while retaining all of the highly discriminative genes. Results: We propose a gene selection method, called local hyperplane-based discriminant analysis (LHDA). LHDA adopts two central ideas. First, it uses a local approximation rather than global measurement; second, …
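The local-approximation idea named in the abstract can be illustrated with a brief, hedged sketch in the style of local-hyperplane (HKNN-type) classifiers. This is not the paper's exact LHDA algorithm; the neighborhood size k, the least-squares hyperplane fit, and the function names are illustrative assumptions.

```python
# A minimal sketch of the local-hyperplane idea (HKNN-style), not the
# authors' LHDA implementation; k and the function names are hypothetical.
import numpy as np

def hyperplane_distance(x, neighbors):
    """Distance from x to the affine hull of its k nearest same-class neighbors."""
    mu = neighbors.mean(axis=0)
    V = (neighbors - mu).T                        # directions spanning the local hyperplane
    alpha, *_ = np.linalg.lstsq(V, x - mu, rcond=None)
    return np.linalg.norm(x - (mu + V @ alpha))   # residual = point-to-hyperplane distance

def classify_local_hyperplane(x, X_train, y_train, k=5):
    """Assign x to the class whose local hyperplane lies closest."""
    best_label, best_dist = None, np.inf
    for label in np.unique(y_train):
        Xc = X_train[y_train == label]
        idx = np.argsort(np.linalg.norm(Xc - x, axis=1))[:k]  # k nearest points in this class
        d = hyperplane_distance(x, Xc[idx])
        if d < best_dist:
            best_label, best_dist = label, d
    return best_label
```

The residual of the least-squares fit serves as the point-to-hyperplane distance, which is what makes the decision local to each class neighborhood rather than a global measurement.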

Cited by 13 publications (9 citation statements, 2017–2021) | References 22 publications
“…Gene expression datasets may have very high dimensionality. High-dimensional gene expression datasets result in a high computational load and degraded performance of the classification algorithm [10, 19, 21]. Therefore, feature selection or dimensionality reduction methods should eliminate redundant and irrelevant features to decrease the ratio of features to samples.…”
Section: Methods
confidence: 99%
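As a concrete illustration of the feature-selection step described in the statement above, here is a minimal filter-style sketch assuming a scikit-learn workflow; the choice of mutual information as the ranking criterion, the synthetic stand-in data, and k=50 are assumptions, not taken from the citing paper.

```python
# Filter-style feature selection sketch: rank features and keep the top k.
# The criterion (mutual information) and k=50 are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif

# Toy stand-in for a gene-expression matrix: many features, few samples.
X, y = make_classification(n_samples=60, n_features=2000, n_informative=20,
                           random_state=0)

selector = SelectKBest(score_func=mutual_info_classif, k=50)  # keep 50 top-ranked features
X_reduced = selector.fit_transform(X, y)
print(X_reduced.shape)  # (60, 50): far fewer features per sample
```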
“…Another issue is that the dimension of the feature space may be too high for the classification algorithm. Many algorithms become invalid or infeasible when a large number of features must be processed [19–21]. A successful approach is to extract a small amount of discriminative information from the high-dimensional space.…”
Section: Introduction
confidence: 99%
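The idea of extracting a small amount of discriminative information from a high-dimensional space can be sketched with a generic projection step; PCA is used here only as a stand-in, since the citing statement does not name a specific method.

```python
# Dimensionality-reduction sketch: project a high-dimensional dataset onto a
# few directions. PCA is an illustrative stand-in, not the cited method.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 2000))   # 60 samples in a 2000-dimensional space

pca = PCA(n_components=10)        # keep 10 principal directions
X_low = pca.fit_transform(X)
print(X_low.shape)                # (60, 10)
```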
“…The parallelization of the weighted-voting combined prediction method divides the data set into N data blocks [3]. Each processing node trains a weighted-voting combined prediction model on its data block, and the training results of each part are aggregated as the training set of the overall prediction model [4].…”
Section: Data Partitioning, Map() and Reduce()
confidence: 99%
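The partition-train-and-weighted-vote scheme described above can be sketched as follows. The use of decision trees as block learners, validation-accuracy weights, and the "map"/"reduce" labels in the comments are illustrative assumptions rather than the cited implementation.

```python
# Sketch of partitioned training with weighted-vote aggregation. Block
# learners and accuracy-based weights are assumptions for illustration only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X_train, y_train, test_size=0.2, random_state=0)

# "map" step: train one model per data block
n_blocks = 4
models, weights = [], []
for Xb, yb in zip(np.array_split(X_tr, n_blocks), np.array_split(y_tr, n_blocks)):
    m = DecisionTreeClassifier(random_state=0).fit(Xb, yb)
    models.append(m)
    weights.append(m.score(X_val, y_val))   # weight each block model by validation accuracy

# "reduce" step: weighted vote across the block models
votes = sum(w * m.predict_proba(X_test) for w, m in zip(weights, models))
y_pred = votes.argmax(axis=1)
print("weighted-vote accuracy:", (y_pred == y_test).mean())
```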
“…On the one hand, CXR imaging is inexpensive, fast, and widely available, but was unable to detect acute pneumonia in previous studies [4, 6, 28]. On the other hand, CT imaging is now clinically adopted as the principal way to confirm positive or suspected-positive COVID-19 cases [11].…”
Section: Introduction
confidence: 99%