Learning with continuous piecewise linear decision trees
2021
DOI: 10.1016/j.eswa.2020.114214

Cited by 11 publications (6 citation statements) · References 30 publications (34 reference statements)
“…It considers all the data in the initial stage and divides it into branches. Each branch indicates a condition on the input values, and the tree keeps growing branch by branch according to the relevant conditions and threshold values until the best split value is found. However, because it results from fitting a single tree, a DT model is prone to overfitting, and finding a tree depth that generalizes well is challenging…”
Section: Methods
confidence: 99%
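To make the quoted description concrete, here is a minimal sketch (not the cited paper's implementation) of how one branch condition can be chosen: scan candidate thresholds on a single feature and keep the split whose score, here the weighted variance of the two child subsets, is best. The function name `best_threshold` and the toy data are illustrative assumptions.

```python
# Minimal sketch of choosing a branch condition: scan candidate thresholds on
# one feature and keep the split with the lowest weighted child variance.
import numpy as np

def best_threshold(x, y):
    """Return (threshold, score) minimizing the weighted variance of the two children."""
    order = np.argsort(x)
    x_sorted, y_sorted = x[order], y[order]
    best = (None, np.inf)
    for i in range(1, len(x_sorted)):
        left, right = y_sorted[:i], y_sorted[i:]
        score = len(left) * left.var() + len(right) * right.var()
        if score < best[1]:
            best = ((x_sorted[i - 1] + x_sorted[i]) / 2, score)
    return best

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 200)
y = np.where(x < 0.4, 1.0, 3.0) + rng.normal(0, 0.1, 200)
print(best_threshold(x, y))  # the chosen threshold should land near 0.4
```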
“…Each branch indicates a condition on the input values, and the tree keeps growing branch by branch according to the relevant conditions and threshold values until the best split value is found [58]. However, because it results from fitting a single tree, a DT model is prone to overfitting [41], and finding a tree depth that generalizes well is challenging [59]. In contrast, RF is an ensemble learning model: it learns from an ensemble of decision trees.…”
Section: Machine Learning Model Training
confidence: 99%
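As a hedged illustration of the contrast drawn above, the following sketch fits an unconstrained scikit-learn decision tree and a random forest on the same noisy data and compares their test errors. It uses standard scikit-learn estimators, not the models of the citing or cited papers.

```python
# A single fully grown tree tends to overfit noisy data, while a random forest
# averages many trees and usually generalizes better.
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, (400, 1))
y = np.sin(X).ravel() + rng.normal(0, 0.3, 400)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

tree = DecisionTreeRegressor(random_state=0).fit(X_tr, y_tr)        # grown to full depth
forest = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_tr, y_tr)

print("single tree   test MSE:", mean_squared_error(y_te, tree.predict(X_te)))
print("random forest test MSE:", mean_squared_error(y_te, forest.predict(X_te)))
```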
“…In the growing process of DTs, the domain is recursively divided into several subregions by extending the tree with multiple leaf nodes and specifying, for each leaf, a decision rule that describes the model output for the instances in the corresponding subregion. The complete prediction model is formed by the linear combination of these decision rules at the leaf nodes [31].…”
Section: Model Construction
confidence: 99%
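The growing process described above, recursive partitioning with a linear decision rule stored at each leaf and the overall prediction formed from those leaf rules, can be sketched as follows. This is a toy construction under simplifying assumptions (split on the median of the first feature, ordinary least squares in each leaf); it is not the algorithm of reference [31] or of the cited paper.

```python
# Toy tree: recursively split the domain into subregions, fit a linear rule in
# each leaf, and combine the leaf rules into one piecewise-linear predictor.
import numpy as np

class LinearLeafTree:
    def __init__(self, max_depth=3, min_samples=20):
        self.max_depth, self.min_samples = max_depth, min_samples

    def fit(self, X, y, depth=0):
        if depth == self.max_depth or len(y) < self.min_samples:
            # Leaf: store a linear rule  y ~ [X, 1] @ w  fitted by least squares
            A = np.hstack([X, np.ones((len(X), 1))])
            self.w, *_ = np.linalg.lstsq(A, y, rcond=None)
            self.children = None
            return self
        # Internal node: split on the median of the first feature (toy choice)
        self.feature, self.threshold = 0, np.median(X[:, 0])
        mask = X[:, self.feature] <= self.threshold
        self.children = (
            LinearLeafTree(self.max_depth, self.min_samples).fit(X[mask], y[mask], depth + 1),
            LinearLeafTree(self.max_depth, self.min_samples).fit(X[~mask], y[~mask], depth + 1),
        )
        return self

    def predict(self, X):
        if self.children is None:
            return np.hstack([X, np.ones((len(X), 1))]) @ self.w
        mask = X[:, self.feature] <= self.threshold
        out = np.empty(len(X))
        out[mask] = self.children[0].predict(X[mask])
        out[~mask] = self.children[1].predict(X[~mask])
        return out

X = np.linspace(-2, 2, 300).reshape(-1, 1)
y = np.abs(X).ravel()                      # piecewise-linear target with a kink at 0
model = LinearLeafTree().fit(X, y)
print(np.abs(model.predict(X) - y).max())  # residual is near zero on this PWL target
```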
“…However, these local linear regressions bring a substantially higher computational burden. A novel PWL decision tree has previously been constructed as a flexible and efficient alternative [89], which can be regarded as an extension of ReLU to the learning framework of decision trees.…”
Section: Tree Searching Algorithm
confidence: 99%
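The ReLU connection mentioned above can be made concrete with a small sketch: a continuous piecewise-linear function can be written as a combination of hinge terms max(0, ax + b), the same nonlinearity used by ReLU networks. The coefficients below are illustrative and are not taken from the cited paper.

```python
# A continuous piecewise-linear curve built from ReLU-style hinge terms.
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def pwl(x):
    # Three terms combine into one continuous piecewise-linear function.
    return 0.5 * x + relu(x - 1.0) - 2.0 * relu(x + 0.5) + 1.0

x = np.linspace(-3, 3, 7)
print(np.round(pwl(x), 3))  # continuous, with kinks at x = -0.5 and x = 1.0
```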