2016
DOI: 10.4238/gmr.15038990

Locally linear embedding and neighborhood rough set-based gene selection for gene expression data classification

Abstract: Cancer subtype recognition and feature selection are important problems in the diagnosis and treatment of tumors. Here, we propose a novel gene selection approach applied to gene expression data classification. First, two classical feature reduction methods, locally linear embedding (LLE) and rough set (RS), are summarized. The advantages and disadvantages of these algorithms are analyzed, and an optimized model for tumor gene selection is developed based on LLE and neighborhood RS (NRS). Bh…
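The abstract describes a two-stage pipeline: manifold embedding with LLE, then rough-set-based gene selection. A minimal sketch of the first stage, using scikit-learn's `LocallyLinearEmbedding` on a synthetic expression matrix; the matrix sizes, neighbor count, and component count are illustrative assumptions, not the authors' experimental setup.

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

# Synthetic stand-in for an expression matrix: 60 tumor samples x 200 genes.
# A real study would load measured expression values here.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 200))

# Stage 1 of the pipeline: LLE maps samples onto a low-dimensional manifold
# while preserving the geometry of each sample's local neighborhood.
lle = LocallyLinearEmbedding(n_neighbors=10, n_components=3, random_state=0)
Z = lle.fit_transform(X)
print(Z.shape)  # (60, 3): each sample now has 3 manifold coordinates
```

The embedded coordinates `Z` would then feed the NRS-based selection stage described in the abstract.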

Cited by 13 publications (5 citation statements)
References 24 publications (27 reference statements)
“…To further verify the classification performance of our proposed method, the eight methods were employed to evaluate the number of selected genes and the classification accuracy on the four gene expression data sets selected from Table 2. The ARDNE algorithm was compared with the seven related state-of-the-art dimensionality reduction methods, which included: (1) the sequential forward selection algorithm (SFS) [51], (2) the sparse group lasso algorithm (SGL) [52], (3) the adaptive sparse group lasso based on conditional mutual information algorithm (ASGL-CMI) [53], (4) the Spearman's rank correlation coefficient algorithm (SC2) [44], (5) the gene selection algorithm based on Fisher linear discriminant and neighborhood rough set (FLD-NRS) [39], (6) the gene selection algorithm based on locally linear embedding and neighborhood rough set (LLE-NRS) [40], and (7) the RelieF algorithm [41] combined with the NRS algorithm [49] (RelieF+NRS). The SVM classifier in the WEKA tool was used to run the simulation experiments.…”
Section: Experimental Results and Analysis (mentioning)
confidence: 99%
“…Therefore, the time complexity of ARDNE is close to O(mn). So far, ARDNE appears to be more efficient than some of the existing algorithms for attribute reduction in [33, 39, 40, 41] in neighborhood decision systems. Furthermore, its space complexity is O(mn).…”
Section: Attribute Reduction Methods Using Neighborhood Entropy Mea… (mentioning)
confidence: 99%
“…In recent years, gene selection and classification methods have been developed. A technique that combines LLE and the neighborhood rough set (NRS) was proposed by Sun et al. [31]. The irrelevant genes were dropped by measuring the separability between each class.…”
Section: Locally Linear Embedding (mentioning)
confidence: 99%
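The citing passage above summarizes the NRS stage as dropping irrelevant genes by measuring class separability. One standard NRS separability measure is the positive-region dependency: the fraction of samples whose neighborhood is label-pure. A minimal sketch under assumed choices (Euclidean distance, a hand-picked radius `delta`, toy data); none of these specifics come from the paper itself.

```python
import numpy as np

def nrs_dependency(X, y, delta):
    """Neighborhood rough set dependency: the fraction of samples whose
    Euclidean delta-neighborhood contains only samples of the same class
    (the 'positive region'). Higher values mean the chosen genes separate
    the classes more cleanly."""
    X = np.asarray(X, dtype=float)
    n = X.shape[0]
    consistent = 0
    for i in range(n):
        dist = np.linalg.norm(X - X[i], axis=1)
        neighbors = dist <= delta           # includes sample i itself
        if np.all(y[neighbors] == y[i]):    # neighborhood is label-pure
            consistent += 1
    return consistent / n

# Toy check: one gene whose values cleanly split two classes.
X = np.array([[0.0], [0.1], [1.0], [1.1]])
y = np.array([0, 0, 1, 1])
print(nrs_dependency(X, y, delta=0.3))  # 1.0: every neighborhood is pure
print(nrs_dependency(X, y, delta=2.0))  # 0.0: neighborhoods mix both classes
```

A gene subset scoring near 1.0 keeps the classes separable, while one scoring near 0.0 contributes little and can be dropped, which is the intuition behind NRS-based gene selection.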
“…The results of MIFS and the other 11 dimensionality reduction methods on these four data sets are compared. The 11 methods used for comparison are: the sequential forward selection algorithm (SFS) [49], the Spearman's rank correlation coefficient algorithm (SC2) [50], the sparse group Lasso algorithm (SGL) [51], the mutual information maximization algorithm (MIM) [49], the adaptive sparse group Lasso algorithm based on conditional mutual information (ASGL-CMI) [52], the distributed ranking filter approach removing the features with information gain zero from the ranking and correlation-based feature selection algorithm (DRFO-CFS) [52], the neighborhood rough set-based reduction algorithm (NRS) [32], locally linear embedding and neighborhood rough set (LLE-NRS) [53], the Relief algorithm combined with the NRS algorithm (Relief-NRS) [54], the gene selection algorithm based on Fisher linear discriminant and neighborhood rough set (FLD-NRS) [18], and the fuzzy backward feature elimination algorithm (FBFE) [55]. Tables 12 and 13 show, respectively, the size and SVM accuracy of the reduction subset selected by the 12 methods on the four data sets, where the meaning of the symbol “/” is the same as that in Tables 9–11.…”
Section: Experiments and Analysis (mentioning)
confidence: 99%