2015
DOI: 10.1016/j.eswa.2015.08.005
Two-layer random forests model for case reuse in case-based reasoning

Cited by 24 publications (1 citation statement)
References 40 publications (26 reference statements)
“…Specifically, at each internal node, the algorithm searches the values of the incoming dataset and recognizes a threshold for one predictor variable to split the dataset such that the homogeneity of dependent variable values in each branch is maximized. In the RFR, each decision tree is trained using a subset of data randomly sampled with replacement from the original training dataset, which can increase the robustness against overfitting [50]. In order to inject an additional layer of randomness, instead of using all variables, only a subset of randomly selected variables are considered to form the split nodes of each tree [51].…”
Section: Random Forest Regression
confidence: 99%
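The quoted passage describes the two layers of randomness in random forest regression: each tree is fit on a bootstrap sample of the training data, and each split node considers only a random subset of the predictor variables. A minimal sketch of both layers in Python using scikit-learn's RandomForestRegressor follows; the dataset, hyperparameter values, and variable names are illustrative assumptions, not taken from the cited paper.

```python
# Minimal sketch of the two layers of randomness in random forest regression.
# Data and hyperparameter values are illustrative, not from the cited paper.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))             # 200 samples, 10 predictor variables
y = 2.0 * X[:, 0] + rng.normal(size=200)   # dependent variable with noise

model = RandomForestRegressor(
    n_estimators=100,      # number of decision trees in the forest
    bootstrap=True,        # layer 1: each tree trains on a sample drawn with replacement
    max_features="sqrt",   # layer 2: random subset of variables considered at each split
    random_state=0,
)
model.fit(X, y)
print(model.predict(X[:5]))  # ensemble prediction averages over all trees
```

Restricting `max_features` below the full variable count is what decorrelates the trees: with all variables available at every split, strong predictors would dominate each tree and the averaged ensemble would gain little robustness against overfitting.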