2021
DOI: 10.2478/cait-2021-0016
A New Noisy Random Forest Based Method for Feature Selection

Abstract: Feature selection is an essential pre-processing step in data mining. It aims at identifying the highly predictive feature subset out of a large set of candidate features. Several approaches for feature selection have been proposed in the literature. Random Forests (RF) are among the most used machine learning algorithms, not just for their excellent prediction accuracy but also for their ability to select informative variables via their associated variable importance measures. Sometimes the RF model over-fits on …
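The abstract describes selecting features with Random Forest variable importance measures. As a minimal illustrative sketch (not the paper's actual method — the dataset, parameters, and cutoff below are assumptions), importance-based selection with scikit-learn might look like this:

```python
# Illustrative sketch of RF importance-based feature selection.
# Dataset and hyper-parameters are made up for demonstration only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic data: 10 candidate features, only 3 of them informative.
X, y = make_classification(n_samples=300, n_features=10,
                           n_informative=3, n_redundant=0,
                           shuffle=False, random_state=0)

rf = RandomForestClassifier(n_estimators=200, random_state=0)
rf.fit(X, y)

# Rank features by mean decrease in impurity and keep the top k.
k = 3
top_k = np.argsort(rf.feature_importances_)[::-1][:k]
print(sorted(top_k.tolist()))
```

Impurity-based importances are known to be biased toward high-cardinality features and can be unstable on over-fitted forests, which is precisely the failure mode the paper's noisy-RF approach targets.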

Cited by 21 publications (12 citation statements) · References 15 publications
“…Feature selection is an active research field in machine learning, as it is an important pre-processing step that has found success in a range of real-world applications. In general, feature selection algorithms are categorized into supervised, semi-supervised and unsupervised feature selection [2,3,4,5,6]. Supervised feature selection methods usually come in three flavors: filter, wrapper and embedded approaches.…”
Section: Feature Selection Methods Classification
confidence: 99%
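The statement above distinguishes filter, wrapper and embedded approaches. A filter method scores each feature independently of any downstream learner; a hedged sketch using mutual information via scikit-learn (the data and `k` are illustrative assumptions, not from the cited works) could look like:

```python
# Illustrative filter-style feature selection: score features by
# mutual information with the target, independent of any classifier.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif

X, y = make_classification(n_samples=200, n_features=8, n_informative=2,
                           n_redundant=0, random_state=1)

selector = SelectKBest(score_func=mutual_info_classif, k=2)
X_new = selector.fit_transform(X, y)
print(X_new.shape)
```

By contrast, a wrapper method would evaluate candidate subsets by training a model on each, and an embedded method (such as RF importance or L1 regularization) performs selection as a by-product of model fitting.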
“…1). As a result, learning models built with the chosen subset of features are more readable and interpretable [3,4,5,25]. The main reasons for using feature selection are: reducing storage requirements and execution time, avoiding the curse of dimensionality, minimizing over-fitting (and thereby improving model generalization), and increasing attainable performance [6].…”
Section: Introduction
confidence: 99%
“…Venkatesh and Anuradha [19] reviewed several feature selection methods and categorized them into three types: filter, wrapper, and embedded methods. Several methods have been used for feature selection, including a filter-based method [20] and a noisy random-forest-based method [21].…”
Section: Previous Studies
confidence: 99%
“…Machine Learning (ML) is considered an important way of extracting value from this data. Moreover, rapid technological advances and large-scale data production necessitate upgrading or replacing conventional techniques [5]. ML is the study of computer algorithms that improve their behaviour over time through experience.…”
Section: Introduction
confidence: 99%