2021
DOI: 10.1007/s10994-021-06094-4
One-Stage Tree: end-to-end tree builder and pruner

Abstract: Decision trees have favorable properties, including interpretability, high computational efficiency, and the ability to learn from little training data. Learning an optimal decision tree is known to be NP-complete, so researchers have proposed many greedy algorithms, such as CART, to learn approximate solutions. Inspired by currently popular neural networks, soft trees that support end-to-end training with back-propagation have attracted more and more attention. However, existing soft trees either lose the interpretab…

Cited by 4 publications (4 citation statements)
References 18 publications (35 reference statements)
“…They employ stochastic routing based on a Bernoulli distribution and utilize non-linear transformer modules at the edges, making the resulting trees soft and oblique. Xu et al. (2022) propose One-Stage Tree as a novel method for learning soft DTs, including the tree structure, while maintaining discretization during training, which results in higher interpretability compared to existing soft DTs. However, in contrast to GradTree, the routing is instance-wise, which significantly hampers a global interpretation of the model.…”
Section: Related Work
confidence: 99%
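The stochastic routing mentioned in the statement above can be illustrated with a minimal NumPy sketch: each internal node computes a going-right probability with a sigmoid gate and samples the branch from a Bernoulli distribution. This is an illustrative sketch, not the cited papers' implementation; all names and shapes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def route(x, weights, biases, depth=2):
    """Route one instance through a perfect binary tree of the given depth.

    At each internal node the going-right probability is sigmoid(w @ x + b),
    and the branch is sampled from a Bernoulli distribution (stochastic
    routing), so the same input may reach different leaves across draws.
    Returns the index of the leaf reached.
    """
    node = 0  # root of a heap-ordered tree: children of node i are 2i+1, 2i+2
    for _ in range(depth):
        p_right = sigmoid(weights[node] @ x + biases[node])
        go_right = rng.random() < p_right  # Bernoulli(p_right) draw
        node = 2 * node + 2 if go_right else 2 * node + 1
    n_internal = 2 ** depth - 1
    return node - n_internal  # leaf index in [0, 2**depth)

# Toy example: a depth-2 tree over 3-dimensional inputs.
W = rng.normal(size=(3, 3))  # one weight vector per internal node
b = rng.normal(size=3)
leaf = route(np.array([0.5, -1.0, 2.0]), W, b)
```

Because the routing is sampled per instance, repeated calls on the same input may reach different leaves, which is exactly the instance-wise behavior the citing paper contrasts with a global interpretation.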
“…This can be done by modeling the possibility for a node to be either a leaf node or a decision node, thereby allowing pruning during learning. Examples of these improvements include the budding tree (Irsoy et al., 2014), the one-stage tree (Xu et al., 2022) and the quadratic program of Zantedeschi et al. (2021).…”
Section: Beyond Recursive Partitioning
confidence: 99%
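The leaf-or-decision modeling described above can be sketched as a learned gate per node that mixes the node's own leaf value with its children's recursive output; driving the gate toward 1 effectively prunes the subtree during learning. This is a hedged sketch of the general idea, not the budding tree or One-Stage Tree objective; all parameter names are illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def node_output(x, node, params):
    """Output of a node with a learned 'leafness' gate gamma in [0, 1].

    gamma -> 1 makes the node act as a leaf (its subtree is pruned);
    gamma -> 0 makes it act as a pure decision node that defers to its
    children, weighted by a soft sigmoid split. Nodes are heap-ordered.
    """
    gamma = sigmoid(params["gate"][node])       # leafness of this node
    if 2 * node + 1 >= len(params["gate"]):     # bottom of the array: pure leaf
        return params["leaf"][node]
    p_right = sigmoid(params["w"][node] @ x + params["b"][node])
    child = (1 - p_right) * node_output(x, 2 * node + 1, params) \
            + p_right * node_output(x, 2 * node + 2, params)
    return gamma * params["leaf"][node] + (1 - gamma) * child

# Toy depth-1 tree: node 0 is internal, nodes 1 and 2 are leaves.
params = {
    "gate": np.array([-4.0, 0.0, 0.0]),  # root gate near 0: decision node
    "leaf": np.array([0.5, 0.0, 1.0]),
    "w": [np.array([1.0, 0.0])] * 3,
    "b": np.array([0.0, 0.0, 0.0]),
}
y = node_output(np.array([3.0, 0.0]), 0, params)
```

Since the whole expression is differentiable in the gate parameters, structure (how leaf-like each node is) can be learned jointly with the splits by back-propagation.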
“…In order to simplify oblique trees, Carreira-Perpinan and Tavallali [41] proposed an algorithm called sparse oblique trees, which produces from an initial oblique tree a new tree with the same or smaller structure but new parameter values that lead to a lower or unchanged misclassification error. One-Stage Tree, a soft tree that builds and prunes the decision tree jointly through a bi-level optimization problem, is presented in [42]. Menze et al. [43] focused on trees with task-optimal recursive partitioning.…”
Section: Related Work
confidence: 99%
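The joint build-and-prune idea via bi-level optimization can be illustrated with an alternating scheme: an inner level fits model parameters on a training split, while an outer level adjusts a pruning gate against a validation loss plus a complexity penalty. This is a generic, simplified sketch under stated assumptions (a single scalar gate, quadratic losses), not the paper's actual objective; all names are illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_bilevel(X_tr, y_tr, X_va, y_va, steps=200, lr=0.5, lam=0.01):
    """Alternating approximation of a bi-level problem (illustrative only).

    Inner level: leaf parameters theta take gradient steps on the training
    loss with the gate fixed. Outer level: a scalar pruning gate alpha takes
    gradient steps on the validation loss plus a complexity penalty with
    theta fixed. Prediction is sigmoid(alpha) * (X @ theta), so driving
    alpha to -inf effectively prunes the 'subtree' away.
    """
    theta, alpha = np.zeros(X_tr.shape[1]), 0.0
    for _ in range(steps):
        g = sigmoid(alpha)
        # Inner step: gradient of mean squared training error w.r.t. theta.
        r_tr = g * (X_tr @ theta) - y_tr
        theta -= lr * (2 * g * X_tr.T @ r_tr / len(y_tr))
        # Outer step: gradient of validation error + penalty w.r.t. alpha.
        r_va = g * (X_va @ theta) - y_va
        dg = g * (1 - g)  # derivative of sigmoid at alpha
        alpha -= lr * (2 * np.mean(r_va * (X_va @ theta)) * dg + lam * dg)
    return theta, alpha

# Toy example: linear data, 60 training and 20 validation samples.
rng = np.random.default_rng(0)
X = rng.normal(size=(80, 2))
y = X @ np.array([1.0, -1.0])
theta, alpha = fit_bilevel(X[:60], y[:60], X[60:], y[60:])
mse = np.mean((sigmoid(alpha) * (X[60:] @ theta) - y[60:]) ** 2)
```

The key design point is the split of responsibilities: the training data never sees the pruning decision's penalty, and the validation data never updates the leaf parameters directly, mirroring the two levels of the optimization.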