2013
DOI: 10.1080/03610918.2012.674600

Variable Selection via Regression Trees in the Presence of Irrelevant Variables

Cited by 4 publications (8 citation statements)
References 11 publications
“…25 Recursive random forest modeling is a commonly used strategy to overcome that problem. 47,48 In this work, one-fourth of the least important variables were excluded from the next round of modeling.…”
Section: Discussion
confidence: 99%
“…23 Based on predictor variable importance, a series of recursive variable selection methods has been proposed in the literature. 24,25 Briefly, in recursive RF the variable importance of each predictor is first calculated. Then a portion of the variables, say 20%, with the smallest importance is eliminated, and a new model is built using the remaining variables.…”
Section: Introduction
confidence: 99%
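
Both recursive-elimination statements quoted above describe the same basic scheme: fit a random forest, rank the predictors by importance, drop the least important fraction (one-fourth or 20% in the quoted works), and refit on the remainder. Below is a minimal sketch of that loop, assuming scikit-learn; the synthetic data, drop fraction, and stopping rule are illustrative assumptions, not details taken from the cited papers.

```python
# Minimal sketch of recursive random-forest variable selection, as described
# in the quoted statements: fit a forest, rank predictors by importance, drop
# the least important fraction, and refit. Data, drop_frac, and the stopping
# rule (min_vars) are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def recursive_rf_selection(X, y, drop_frac=0.25, min_vars=5, random_state=0):
    """Repeatedly fit a random forest and discard the least important variables."""
    keep = np.arange(X.shape[1])          # indices of variables still in play
    while len(keep) > min_vars:
        rf = RandomForestRegressor(n_estimators=200, random_state=random_state)
        rf.fit(X[:, keep], y)
        order = np.argsort(rf.feature_importances_)   # ascending importance
        n_drop = max(1, int(drop_frac * len(keep)))   # e.g. one-fourth per round
        keep = keep[order[n_drop:]]                   # eliminate the bottom slice
    return keep

# Illustrative usage on synthetic data: 5 informative + 45 irrelevant predictors.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))
y = X[:, :5].sum(axis=1) + rng.normal(scale=0.5, size=200)
print(sorted(recursive_rf_selection(X, y)))   # tends to recover columns 0-4
```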
“…Various learning algorithms are available for building prediction models from data. They demonstrate different sensitivities to irrelevant variables [14, 22–25]. Some learning algorithms attempt to directly reflect each variable's relevance or automatically discard irrelevant ones based on their intrinsic characteristics.…”
Section: Related Work
confidence: 99%
“…In the presence of many irrelevant variables, however, there is an increased likelihood of spurious node splits occurring on some irrelevant variables during training [14]. A prediction model with higher complexity is more likely to contain many spurious node splits, which may degrade the prediction accuracy [22].…”
Section: Decision Tree (DT)
confidence: 99%
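
The quoted point is easy to reproduce: an unpruned regression tree grown on data with many irrelevant predictors will place some of its splits on pure noise. A hedged sketch, assuming scikit-learn; all data and parameters here are synthetic assumptions, not from the paper.

```python
# Demonstration of spurious splits: fit an unpruned regression tree on
# 2 relevant + 40 irrelevant predictors, then count how many internal-node
# splits fall on the noise columns.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(1)
n, n_noise = 300, 40
signal = rng.normal(size=(n, 2))            # 2 relevant variables (columns 0-1)
noise = rng.normal(size=(n, n_noise))       # 40 irrelevant variables
X = np.hstack([signal, noise])
y = 3 * signal[:, 0] + 2 * signal[:, 1] + rng.normal(scale=0.5, size=n)

tree = DecisionTreeRegressor(random_state=0).fit(X, y)
split_vars = tree.tree_.feature[tree.tree_.feature >= 0]   # negative marks leaves
n_spurious = np.sum(split_vars >= 2)                       # splits on noise columns
print(f"{n_spurious} of {len(split_vars)} splits use irrelevant variables")
```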
“…On the other hand, Young et al [27] and Toth and Eltinge [28] focus on the design of the samples used to train the tree rather than on the learning algorithm itself. Nowadays, hybrid methods have become new tools to explore, such as the combination of feature selection methods with regression trees [29], the use of a regression tree to select the input data of a neural network [30], or ensembles of rules obtained from regression trees [31].…”
Section: Proposed Methodology: Regression Trees
confidence: 99%
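
One of the hybrids mentioned above, combining a feature selection step with a regression tree, can be sketched as a simple pipeline. This is an illustrative assumption of how such a hybrid might look (univariate scoring with f_regression and k = 5 are arbitrary choices here), not the actual method of the cited reference [29].

```python
# Hypothetical hybrid: univariate feature selection chained with a regression
# tree. Scoring function, k, tree depth, and data are illustrative assumptions.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 30))
y = X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.5, size=200)

# Keep the 5 predictors with the strongest univariate association, then fit a tree.
model = make_pipeline(SelectKBest(f_regression, k=5),
                      DecisionTreeRegressor(max_depth=4, random_state=0))
print(cross_val_score(model, X, y, cv=5).mean())   # cross-validated R^2
```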