2019
DOI: 10.1007/978-3-030-22871-2_48

Automatic Induction of Neural Network Decision Tree Algorithms

Abstract: This work presents an approach to the automatic induction of non-greedy decision trees constructed from neural network architectures. This construction can be used to transfer weights when growing or pruning a decision tree, allowing non-greedy decision tree algorithms to automatically learn and adapt to the ideal architecture. In this work, we examine the underpinning ideas within ensemble modelling and Bayesian model averaging which allow our neural network to asymptotically approach the ideal architecture t…
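The abstract's idea of transferring weights when growing a tree can be sketched as follows. This is purely illustrative and not the paper's code; the function names (`soft_split`, `grow_child`) and the sigmoid routing are assumptions. The point is that if each decision node is a tiny neural unit, a newly grown child can be initialised from a copy of the parent's weights, so the routing behaviour learned so far is preserved rather than re-learned.

```python
# Hypothetical illustration (names are assumptions, not from the paper):
# a soft decision node is a small neural unit, so growing the tree can
# transfer the parent's learned weights into the new child node.
import numpy as np

def soft_split(w, b, x):
    """Routing probability of a soft (sigmoidal) decision node."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def grow_child(w_parent, b_parent, noise=1e-3, seed=0):
    """New child starts as a slightly perturbed copy of the parent,
    so the routing behaviour learned so far is preserved."""
    rng = np.random.default_rng(seed)
    return w_parent + noise * rng.standard_normal(w_parent.shape), b_parent

# Usage: at creation, the child routes (almost) identically to the parent
w, b = np.array([1.5, -0.5]), 0.2
x = np.array([0.3, 0.7])
w_child, b_child = grow_child(w, b)
```

Because the child starts where the parent left off, subsequent gradient updates only need to specialise it, which is what makes growing and pruning cheap in this framing.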

Cited by 3 publications (4 citation statements)
References 4 publications (8 reference statements)
“…Learning Decision Stumps as a Model Selection Problem If we interpret the decision function to be a model selection process, then model selection approaches can be used to determine the ideal model, and hence axis-parallel split for the decision function. One approach is to use a stacking model, which has been shown to asymptotically approach Bayesian model averaging and can also be used as a neural network architecture search problem for neural network decision tree algorithms [16]. We then formulate stacking every potential axis-parallel split as follows:…”
Section: Decision Stumps
confidence: 99%
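The stacking formulation quoted above can be sketched concretely. This is an illustrative reconstruction under stated assumptions, not the cited papers' code: every candidate axis-parallel split is treated as one model in the stack, and a trainable softmax over the candidates (a small neural layer) learns, by gradient descent, to place its mass on the best split. All function names here are hypothetical.

```python
# Illustrative sketch only: a decision stump learned as a model-selection
# problem, by stacking every candidate axis-parallel split and training
# the softmax mixture weights with gradient descent.
import numpy as np

def candidate_splits(X):
    """Enumerate candidate (feature, threshold) axis-parallel splits."""
    return [(j, t) for j in range(X.shape[1])
            for t in np.unique(X[:, j])[:-1]]

def fit_stump_by_stacking(X, y, steps=500, lr=0.5):
    splits = candidate_splits(X)
    # P[k, i] = 1 if sample i falls right of split k's threshold
    P = np.array([(X[:, j] > t).astype(float) for j, t in splits])
    alpha = np.zeros(len(splits))            # stacking logits, one per split
    for _ in range(steps):
        w = np.exp(alpha - alpha.max())
        w /= w.sum()                         # softmax mixture weights
        yhat = w @ P                         # stacked (ensemble) prediction
        g_yhat = 2.0 * (yhat - y) / len(y)   # gradient of mean squared error
        g_w = P @ g_yhat                     # gradient w.r.t. mixture weights
        g_alpha = w * (g_w - w @ g_w)        # backprop through the softmax
        alpha -= lr * g_alpha
    # the mixture mass concentrates on the best split; select it
    return splits[int(np.argmax(alpha))]

# Usage: a 1-D problem perfectly separable at x > 0.4
X = np.array([[0.1], [0.2], [0.4], [0.6], [0.8], [0.9]])
y = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])
feature, threshold = fit_stump_by_stacking(X, y)
```

Because the split is chosen by differentiable mixture weights rather than a greedy impurity scan, the same machinery plugs into a larger neural network decision tree and can be trained end to end, which is the connection the citing papers draw.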
“…TreeGrad is an extension of Deep Neural Decision Forests, which treats the node split structure to be a neural network architecture search problem in the manner described by [16]; whilst enforcing neural network compression approaches to render our decision trees to be more interpretable through creating axis-parallel splits.…”
Section: Related Work
confidence: 99%
“…In this study, we conducted a comprehensive investigation on how ML algorithms can be exploited to cope with the challenges addressed above. We analyzed how different types of prediction models, namely the conventional decision tree (DT) 21,22 prediction models, the state-of-the-art Deep Neural Network (DNN) prediction models 15 , as well as the logistic regression (LR) based prediction models, performed in identifying the patients with high risk of rapid progression and death. With the experimental results, our discussions have focused on how DT prediction models can be incorporated to facilitate physicians' decisions on triaging patients based on risk of clinical deterioration and prioritizing scarce medical resources for best population outcomes during a pandemic of EID.…”
Section: Introduction
confidence: 99%