Datasets produced in modern research fields such as biomedical science pose a number of challenges for machine learning techniques used in binary classification due to their high dimensionality. Feature selection is one of the most important statistical techniques for reducing the dimensionality of such datasets, and methods are therefore needed to find an optimal number of features that yields the desired learning performance. In the machine learning context, gene selection is treated as a feature selection problem whose objective is to find a small subset of the most discriminative features for the target class. In this paper, a gene selection method is proposed that identifies the most discriminative genes in two stages. In the first stage, genes that unambiguously assign the maximum number of samples to their respective classes are selected using a greedy approach. The remaining genes are then divided into a certain number of clusters; from each cluster, the most informative genes are selected via the lasso method and combined with the genes selected in the first stage. The performance of the proposed method is assessed through comparison with other state-of-the-art feature selection methods on gene expression datasets. This is done by applying two classifiers, i.e., random forest and support vector machine, to the datasets restricted to the selected genes and the training samples, and calculating their classification accuracy, sensitivity, and Brier score on the samples in the testing part. Boxplots based on the results and correlation matrices of the selected genes are then constructed. The results show that the proposed method outperforms the other methods.

INDEX TERMS Clustering, classification, feature selection, high dimensional data, microarray gene expression data.
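The greedy first stage described above can be sketched in pure Python. The unambiguity criterion used here (a sample is assigned unambiguously by a gene when its expression value falls outside the region where the two class-wise ranges overlap) and the stopping rule are illustrative assumptions, not the paper's exact criterion, and the clustering-plus-lasso second stage is omitted.

```python
def greedy_stage(X, y, n_genes=2):
    """Greedily pick genes that unambiguously assign the most
    not-yet-covered samples to their classes (illustrative sketch).
    X: samples x genes (list of lists); y: binary labels (0/1)."""
    n, p = len(X), len(X[0])
    uncovered = set(range(n))
    selected = []
    for _ in range(n_genes):
        best_gene, best_cover = None, set()
        for g in range(p):
            if g in selected:
                continue
            col = [row[g] for row in X]
            a = [v for v, lab in zip(col, y) if lab == 0]
            b = [v for v, lab in zip(col, y) if lab == 1]
            # interval where the two class-wise expression ranges overlap
            lo = max(min(a), min(b))
            hi = min(max(a), max(b))
            # samples outside the overlap region are assigned unambiguously
            cover = {i for i in uncovered if col[i] < lo or col[i] > hi}
            if len(cover) > len(best_cover):
                best_gene, best_cover = g, cover
        if best_gene is None:  # no gene covers any further sample
            break
        selected.append(best_gene)
        uncovered -= best_cover
    return selected
```

On a toy dataset where gene 0 perfectly separates the classes, the sketch selects gene 0 and then stops once every sample is covered.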
Ensemble methods based on k-NN models minimise the effect of outliers in a training dataset by searching for groups of the k closest data points to estimate the response of an unseen observation. However, traditional k-NN based ensemble methods use the arithmetic mean of the training points' responses for estimation, which has several weaknesses. Traditional k-NN based models are also adversely affected by the presence of non-informative features in the data. This paper suggests a novel ensemble procedure consisting of a class of base k-NN models, each constructed on a bootstrap sample drawn from the training dataset with a random subset of features. Within the k nearest neighbours determined by each k-NN model, a stepwise regression is fitted to predict the test point. The final estimate of the target observation is then obtained by averaging the estimates from all the models in the ensemble. The proposed method is compared with other state-of-the-art procedures on 16 benchmark datasets in terms of the coefficient of determination (R²), Pearson's product-moment correlation coefficient (r), mean square predicted error (MSPE), root mean squared error (RMSE), and mean absolute error (MAE) as performance metrics. Furthermore, boxplots of the results are also constructed. The suggested ensemble procedure outperformed the other procedures on almost all the datasets. The efficacy of the method has also been verified by assessing it against the other methods after adding non-informative features to the datasets considered. The results reveal that the proposed method is more robust to the issue of non-informative features in the data than the rest of the methods.
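The base-model recipe above (bootstrap sample, random feature subset, k nearest neighbours, regression inside the neighbourhood, averaging across models) can be sketched as follows. As an illustrative simplification, full stepwise regression is replaced here by selecting the single feature with the smallest residual sum of squares; the hyperparameter defaults are assumptions, not the paper's settings.

```python
import random
from statistics import mean

def knn_stepwise_ensemble_predict(X, y, x_new, n_models=25, k=3, seed=0):
    """Sketch of the ensemble: each base model draws a bootstrap sample
    and a random feature subset, finds the k nearest training points to
    x_new, and fits a one-variable regression (a minimal stand-in for
    stepwise selection) within that neighbourhood."""
    rng = random.Random(seed)
    p = len(X[0])
    preds = []
    for _ in range(n_models):
        idx = [rng.randrange(len(X)) for _ in range(len(X))]   # bootstrap
        feats = rng.sample(range(p), max(1, p // 2))           # feature subset
        def dist(i):
            return sum((X[i][j] - x_new[j]) ** 2 for j in feats)
        nearest = sorted(idx, key=dist)[:k]
        # "stepwise" reduced to one step: keep the feature minimising RSS
        best = None
        for j in feats:
            xs = [X[i][j] for i in nearest]
            ys = [y[i] for i in nearest]
            xb, yb = mean(xs), mean(ys)
            sxx = sum((v - xb) ** 2 for v in xs)
            b1 = (sum((v - xb) * (w - yb) for v, w in zip(xs, ys)) / sxx
                  if sxx > 0 else 0.0)
            b0 = yb - b1 * xb
            rss = sum((w - (b0 + b1 * v)) ** 2 for v, w in zip(xs, ys))
            if best is None or rss < best[0]:
                best = (rss, b0, b1, j)
        _, b0, b1, j = best
        preds.append(b0 + b1 * x_new[j])
    return mean(preds)          # average over all base models
```

Fitting a local regression instead of averaging the neighbours' responses lets each base model extrapolate a trend within the neighbourhood rather than returning a flat local mean.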
In this paper, a novel feature selection method for microarray gene expression datasets, called Robust Proportional Overlapping Score (RPOS), is proposed by utilizing a robust measure of dispersion, i.e., the median absolute deviation (MAD). The method robustly identifies the most discriminative genes by considering the overlap of the gene expression values between classes in binary class problems. Genes with a high degree of overlap between classes are discarded, and the ones that discriminate between the classes are selected. The results of the proposed method are compared with five state-of-the-art gene selection methods based on classification error, the Brier score, and sensitivity, considering eleven gene expression datasets. Classification of observations for the different sets of genes selected by the proposed method is carried out by three different classifiers, i.e., random forest, k-nearest neighbors (k-NN), and support vector machine (SVM). Box-plots and stability scores of the results are also shown in this paper. The results reveal that in most of the cases the proposed method outperforms the other methods.
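A MAD-based overlap score of the kind described above can be sketched as follows. The robust per-class interval (median ± c·MAD), the constant c, and the proportional-overlap formula are illustrative assumptions rather than the paper's exact definitions; lower scores indicate more discriminative genes.

```python
from statistics import median

def mad(values):
    """Median absolute deviation: a robust measure of dispersion."""
    m = median(values)
    return median(abs(v - m) for v in values)

def rpos_scores(X, y, c=2.0):
    """For each gene, build a robust interval per class (median +/- c*MAD)
    and score the proportional overlap of the two intervals."""
    p = len(X[0])
    scores = []
    for g in range(p):
        col = [row[g] for row in X]
        intervals = []
        for cls in (0, 1):
            vals = [v for v, lab in zip(col, y) if lab == cls]
            m, s = median(vals), mad(vals)
            intervals.append((m - c * s, m + c * s))
        lo = max(intervals[0][0], intervals[1][0])
        hi = min(intervals[0][1], intervals[1][1])
        overlap = max(0.0, hi - lo)              # length of intersection
        span = (max(intervals[0][1], intervals[1][1])
                - min(intervals[0][0], intervals[1][0]))
        scores.append(overlap / span if span > 0 else 1.0)
    return scores
```

Because medians and MADs are insensitive to a few extreme expression values, a single outlying sample cannot inflate a gene's class interval the way it would with means and standard deviations.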
kNN based ensemble methods minimise the effect of outliers by identifying a set of data points in the given feature space that are nearest to an unseen observation in order to predict its response using majority voting. Ordinary kNN based ensembles identify the k nearest observations within a region (bounded by a sphere) based on a predefined value of k. This scenario, however, might not work in situations where the test observation follows the pattern of the closest data points of the same class that lie on a certain path not contained in the given sphere. This paper proposes a k nearest neighbour ensemble in which the neighbours are determined in k steps. Starting from the first nearest observation to the test point, the algorithm identifies a single observation that is closest to the observation selected at the previous step. In each base learner of the ensemble, this search is carried out over k steps on a random bootstrap sample with a random subset of features selected from the feature space. The final predicted class of the test point is determined by a majority vote over the predicted classes given by all base models. This new ensemble method is applied to 17 benchmark datasets and compared with other classical methods, including kNN based models, in terms of classification accuracy, kappa, and the Brier score as performance metrics. Boxplots are also utilised to illustrate the difference between the results given by the proposed and the other state-of-the-art methods. The proposed method outperformed the rest of the classical methods in the majority of cases. The paper also gives a detailed simulation study for further assessment.
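The step-wise neighbour search can be sketched as below: starting from the training point nearest to the test point, each further neighbour is the unvisited point closest to the previous one, so the k neighbours trace a path rather than filling a fixed-radius sphere. The squared Euclidean distance, the feature-subset size, and the hyperparameter defaults are illustrative assumptions.

```python
import random
from collections import Counter

def path_knn_predict(X, y, x_new, k=3, n_models=21, seed=1):
    """Ensemble sketch: each base model uses a bootstrap sample and a
    random feature subset, finds k neighbours in k chained steps, and
    votes with their class labels; models are combined by majority vote."""
    rng = random.Random(seed)
    p = len(X[0])
    votes = []
    for _ in range(n_models):
        # unique indices drawn with replacement (bootstrap sample)
        idx = list({rng.randrange(len(X)) for _ in range(len(X))})
        feats = rng.sample(range(p), max(1, p // 2))
        def d(i, point):
            return sum((X[i][j] - point[j]) ** 2 for j in feats)
        current, path = x_new, []
        for _ in range(min(k, len(idx))):
            # next neighbour: unvisited point closest to the PREVIOUS one
            nxt = min((i for i in idx if i not in path),
                      key=lambda i: d(i, current))
            path.append(nxt)
            current = X[nxt]
        votes.append(Counter(y[i] for i in path).most_common(1)[0][0])
    return Counter(votes).most_common(1)[0][0]
```

On two well-separated clusters the chained search stays inside the cluster nearest the test point, so the majority vote recovers that cluster's class.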
This study proposes a supervised feature selection technique for classification in high dimensional binary class problems by adding robustness to the conventional Fisher Score. The proposed method utilizes the more robust measure of location, i.e., the median, and the measure of dispersion known as the Rousseeuw and Croux statistic (Qn). Initially, a minimum subset of genes is identified by a greedy search approach, which is then combined with the top-ranked genes obtained via the proposed Robust Fisher Score (RFish). To remove redundancy among the selected genes, the Least Absolute Shrinkage and Selection Operator (LASSO) is then applied. The proposed method is validated on five publicly available datasets and is further assessed in a detailed simulation study. The results of the proposed method are compared with six well-known feature selection methods based on prediction performance via Random Forest (RF), Support Vector Machine (SVM), and k Nearest Neighbour (k-NN) classifiers. The findings are presented in boxplots and barplots, which show that the proposed method (RFish) outperforms all the other methods in the majority of cases.

INDEX TERMS Classification, feature selection, high dimensional gene expression datasets, Fisher Score, Rousseeuw and Croux statistic.
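A robust Fisher score of this kind can be sketched by replacing the class means with medians and the variances with squared Qn scale estimates. The Qn implementation below is a deliberately simplified version (first quartile of pairwise absolute differences times the asymptotic consistency constant 2.2219, with finite-sample corrections omitted), and the exact scoring formula is an assumption modelled on the conventional Fisher Score.

```python
from statistics import median

def qn_scale(values):
    """Simplified Rousseeuw-Croux Qn: first quartile of the pairwise
    absolute differences, scaled by the consistency constant 2.2219."""
    diffs = sorted(abs(a - b) for i, a in enumerate(values)
                   for b in values[i + 1:])
    return 2.2219 * diffs[len(diffs) // 4]

def robust_fisher_scores(X, y):
    """Robust Fisher score per gene: (median difference)^2 divided by
    the sum of squared Qn scale estimates of the two classes."""
    p = len(X[0])
    scores = []
    for g in range(p):
        col = [row[g] for row in X]
        stats = []
        for cls in (0, 1):
            vals = [v for v, lab in zip(col, y) if lab == cls]
            stats.append((median(vals), qn_scale(vals)))
        (m0, s0), (m1, s1) = stats
        denom = s0 ** 2 + s1 ** 2
        scores.append((m0 - m1) ** 2 / denom if denom > 0 else float("inf"))
    return scores
```

Unlike the MAD, Qn needs no location estimate and retains good efficiency under normality, which is one common motivation for choosing it as the dispersion measure.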
The current study proposes a novel technique for feature selection by incorporating robustness into the conventional signal-to-noise ratio (SNR). The proposed method utilizes a robust measure of location, i.e., the median, together with robust measures of variation, i.e., the median absolute deviation (MAD) and the interquartile range (IQR), in the SNR. In this way, two independent robust signal-to-noise ratios are proposed. The proposed method selects the most informative genes/features by combining the minimum subset of genes or features obtained via a greedy search approach with the top-ranked genes selected through the robust signal-to-noise ratio (RSNR). The results obtained via the proposed method are compared with well-known gene/feature selection methods on the basis of a performance metric, i.e., the classification error rate. A total of 5 gene expression datasets are used in this study. Different subsets of informative genes are selected by the proposed and all the other methods included in the study, and their efficacy in terms of classification is investigated using classifier models such as support vector machine (SVM), random forest (RF), and k-nearest neighbors (k-NN). The results of the analysis reveal that the proposed method (RSNR) produces lower error rates than all the other competing feature selection methods in the majority of cases. For further assessment of the method, a detailed simulation study is also conducted.
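The two robust variants can be sketched from the conventional SNR, (mean1 − mean2)/(sd1 + sd2), by substituting medians for the means and either the MAD or the IQR for the standard deviations. The exact form of the denominator is an assumption modelled on the classical SNR.

```python
from statistics import median, quantiles

def mad(values):
    """Median absolute deviation."""
    m = median(values)
    return median(abs(v - m) for v in values)

def iqr(values):
    """Interquartile range via the statistics.quantiles cut points."""
    q1, _, q3 = quantiles(values, n=4)
    return q3 - q1

def robust_snr(X, y, scale=mad):
    """Robust SNR per gene: |median1 - median2| / (scale1 + scale2),
    with `scale` either mad (default) or iqr."""
    p = len(X[0])
    scores = []
    for g in range(p):
        col = [row[g] for row in X]
        m, s = [], []
        for cls in (0, 1):
            vals = [v for v, lab in zip(col, y) if lab == cls]
            m.append(median(vals))
            s.append(scale(vals))
        denom = s[0] + s[1]
        scores.append(abs(m[0] - m[1]) / denom if denom > 0 else float("inf"))
    return scores
```

Passing the scale estimator as a parameter yields the two independent robust ratios described above from a single scoring routine.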
Feature selection in high dimensional gene expression datasets not only reduces the dimension of the data, but also the execution time and computational cost of the underlying classifier. The current study introduces a novel feature selection method called weighted signal-to-noise ratio (WSNR), which exploits feature weights based on support vectors and the signal-to-noise ratio, with the objective of identifying the most informative genes in high dimensional classification problems. The combination of these two state-of-the-art procedures enables the extraction of the most informative genes. The corresponding weights of the two procedures are multiplied and arranged in decreasing order; a larger weight for a feature indicates its discriminatory power in classifying the tissue samples to their true classes. The method is validated on eight gene expression datasets. Moreover, the results of the proposed method (WSNR) are also compared with four well-known feature selection methods. We found that WSNR outperforms the other competing methods on 6 out of 8 datasets. Box-plots and bar-plots of the results of the proposed method and all the other methods are also constructed. The proposed method is further assessed on simulated data; the simulation analysis reveals that WSNR outperforms all the other methods included in the study.
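The combination step can be sketched as an element-wise product of two per-feature weight vectors followed by a descending sort. In the paper the first vector is derived from SVM support vectors; below, any linear classifier's absolute coefficients are accepted as a stand-in, which is an assumption for illustration only.

```python
from statistics import mean, stdev

def snr_weights(X, y):
    """Classical signal-to-noise ratio per feature:
    |mean1 - mean2| / (sd1 + sd2)."""
    p = len(X[0])
    out = []
    for g in range(p):
        col = [row[g] for row in X]
        stats = []
        for cls in (0, 1):
            vals = [v for v, lab in zip(col, y) if lab == cls]
            stats.append((mean(vals), stdev(vals)))
        (m0, s0), (m1, s1) = stats
        out.append(abs(m0 - m1) / (s0 + s1) if s0 + s1 > 0 else 0.0)
    return out

def wsnr_ranking(X, y, clf_weights):
    """Weighted SNR sketch: multiply a linear classifier's absolute
    feature weights by the SNR weights, then rank features by the
    combined weight in decreasing order."""
    w = [abs(c) * s for c, s in zip(clf_weights, snr_weights(X, y))]
    return sorted(range(len(w)), key=lambda j: -w[j])
```

Multiplying the two weight vectors means a gene ranks highly only if both the classifier and the SNR consider it discriminative, which filters out genes that score well on just one criterion.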