Forest optimization algorithm‐based feature selection using classifier ensemble (2019)
DOI: 10.1111/coin.12265
Abstract: Feature selection is the process of choosing a relevant subset of features from a high-dimensional dataset to enhance the performance of the classifier. Much research has been carried out on feature selection. Algorithms such as Naïve Bayes (NB), decision trees, and genetic algorithms are applied to high-dimensional datasets to select the relevant features and to increase computational speed. The proposed model presents a solution for selection of features us…

Cited by 10 publications (8 citation statements). References 27 publications.
“…Forest trees are ranked according to their fitness function value. The tree with the highest rank is selected as the best tree (solution), and its age is set to zero to prevent its aging and extinction (Moorthy and Gandhi, 2019).…”
Section: Methods
confidence: 99%
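The ranking and age-reset step quoted above can be sketched in a few lines. This is an illustrative sketch, not the authors' implementation: the dictionary-based tree representation, the `lifetime` parameter, and the function name are all assumptions made for the example.

```python
def foa_rank_and_reset(trees, fitness, lifetime=6):
    # Hypothetical sketch of one FOA bookkeeping step (assumed names).
    # Each tree is a dict with a candidate feature subset and an age.
    # Rank trees by fitness, reset the best tree's age to 0 so it never
    # reaches the lifetime limit, then age the rest and prune old trees.
    ranked = sorted(trees, key=lambda t: fitness(t["features"]), reverse=True)
    best = ranked[0]
    best["age"] = 0  # the best solution is protected from aging/extinction
    survivors = [best]
    for t in ranked[1:]:
        t["age"] += 1
        if t["age"] <= lifetime:  # trees older than `lifetime` are removed
            survivors.append(t)
    return best, survivors
```

In the full algorithm this step runs each iteration, between local seeding (small mutations of young trees) and global seeding (replacing removed trees with new random candidates).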
“…BCFA was tested on three benchmark datasets and achieved stunning results in selecting the optimal features in less time. A hybrid feature selection algorithm based on the forest optimization algorithm (FOA) and minimization of redundancy and maximum relevance (mRMR) was proposed in [48]. The results showed that applying k-NN and NB classifiers with the proposed FOA algorithm outperformed standard classifier algorithms.…”
Section: Related Work
confidence: 99%
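The hybrid approach described above rests on mRMR's greedy criterion: pick the feature with the highest relevance to the class minus its average redundancy with features already selected. A minimal sketch for discrete features follows; the function names and the plain count-based mutual information estimator are assumptions for illustration, not the cited paper's code.

```python
from collections import Counter
from math import log

def mutual_info(x, y):
    # Mutual information (in nats) between two discrete sequences,
    # estimated from empirical joint and marginal frequencies.
    n = len(x)
    px, py, pxy = Counter(x), Counter(y), Counter(zip(x, y))
    return sum(c / n * log((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in pxy.items())

def mrmr_select(columns, y, k):
    # Greedy mRMR over a list of feature columns: at each step take the
    # column maximizing relevance I(f; y) minus the mean redundancy
    # I(f; s) with the already-selected columns s.
    selected, remaining = [], list(range(len(columns)))
    while len(selected) < k and remaining:
        def score(j):
            rel = mutual_info(columns[j], y)
            red = (sum(mutual_info(columns[j], columns[s]) for s in selected)
                   / len(selected)) if selected else 0.0
            return rel - red
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected
```

In the hybrid scheme, a filter pass like this prunes the least important features first, and the forest optimization algorithm then searches over the reduced feature set.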
“…Single variable classifiers are used for feature ranking and selection, considering the potential connection with the classifier in Reference 23. The forest optimization algorithm has been used for feature selection in collaboration with data preprocessing based on the minimum redundancy and maximum relevance (mRMR) technique to initially remove the least important features from the feature set [24]. The t‐distributed stochastic neighbor embedding (t‐SNE) [25] maps the high‐dimensional data to a lower‐dimensional (typically 2D or 3D) space for visualization purposes.…”
Section: Related Work
confidence: 99%