2016 5th Brazilian Conference on Intelligent Systems (BRACIS) 2016
DOI: 10.1109/bracis.2016.018
Hyper-Parameter Tuning of a Decision Tree Induction Algorithm

Cited by 81 publications (73 citation statements) | References 22 publications
“…These are summarized in Figure and they are interconnected, so modifying the value of one parameter will impact the most suitable value of other parameters. This is the case for most machine learning models, but it is especially challenging for boosted trees because they are sensitive to these hyper‐parameters, with a relatively small window of optimal hyper‐parameters outside of which the model under- or over-fits. Throughout this work, we used Hyperopt, a Python library for automatically searching the hyper‐parameter space and making optimal choices.…”
Section: Hyper‐parameter Search Parallelization
confidence: 99%
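The excerpt above notes that hyper-parameters interact, so the best value of one depends on the others. The following minimal, self-contained sketch illustrates that point with a random search (the simplest form of the automated search that libraries like Hyperopt perform) over a contrived error surface; the objective function, parameter names, and ranges are all illustrative assumptions, not taken from the paper.

```python
import random

def validation_error(learning_rate, max_depth):
    # Hypothetical error surface: the best depth shifts with the
    # learning rate, so the two hyper-parameters interact.
    best_depth = 3 + 10 * learning_rate
    return (max_depth - best_depth) ** 2 + (learning_rate - 0.1) ** 2

def random_search(n_trials=200, seed=0):
    # Sample hyper-parameter settings at random and keep the best one.
    rng = random.Random(seed)
    best = None
    for _ in range(n_trials):
        lr = rng.uniform(0.01, 0.5)
        depth = rng.randint(1, 12)
        err = validation_error(lr, depth)
        if best is None or err < best[0]:
            best = (err, lr, depth)
    return best

err, lr, depth = random_search()
print(f"best error={err:.4f} lr={lr:.3f} depth={depth}")
```

A model-based optimizer such as Hyperopt's TPE replaces the uniform sampling above with sampling guided by previous trials, which matters precisely when the window of good settings is small.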
“…Most importantly, for this work, boosted trees handle missing data values, as the corresponding branching is down-weighted. However, a major challenge is that the model is more difficult to tune and highly sensitive to the hyper‐parameter setup …”
Section: Introduction
confidence: 99%
“…In contrast, it is a rather new insight that HPO can be used to adapt general-purpose pipelines to specific application domains [30]. Nowadays, it is also widely acknowledged that tuned hyperparameters improve over the default setting provided by common machine learning libraries [100,116,130,149].…”
confidence: 99%
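The excerpt above states that tuned hyper-parameters generally improve over library defaults. A minimal sketch of that comparison, assuming a contrived validation-error surface and made-up default values (the parameter names echo common decision-tree settings but are illustrative, not the paper's):

```python
def toy_validation_error(min_samples_split, max_depth):
    # Contrived error surface, minimized at min_samples_split=8, max_depth=6.
    return abs(min_samples_split - 8) * 0.05 + abs(max_depth - 6) * 0.1

# Hypothetical "library default" setting.
DEFAULTS = {"min_samples_split": 2, "max_depth": 10}

def grid_search():
    # Exhaustively evaluate a small grid and return the best setting.
    grid = [(s, d) for s in (2, 4, 8, 16, 32) for d in (2, 4, 6, 8, 10)]
    return min(grid, key=lambda p: toy_validation_error(*p))

default_err = toy_validation_error(**DEFAULTS)
best = grid_search()
tuned_err = toy_validation_error(*best)
print(f"default error={default_err:.2f}, tuned {best} error={tuned_err:.2f}")
```

On this toy surface the default setting scores 0.70 while the grid search finds (8, 6) with error 0.00; real tuning studies such as the one cited make the same comparison with cross-validated model performance instead of a closed-form surface.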
“…Furthermore, in general, the complete set of meta-features provided the best results for most algorithms. The meta-datasets for the J48 algorithm were generated based on HP tuning results obtained from 102 datasets reported in [26]. The J48 hyperparameter space is also presented in Appendix C. In this paper, the "hyperparameter profile" term refers to how sensitive an algorithm may be to the HP tuning task. missing in the chart.…”
Section: A Note On the Generalization
confidence: 99%
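The excerpt above refers to meta-features computed over datasets to build meta-datasets for the J48 study. As a hedged sketch of what such meta-features can look like, here are three simple ones (instance count, attribute count, class entropy); the specific choice of features and all names here are assumptions for illustration, not the cited study's actual feature set.

```python
import math

def class_entropy(labels):
    # Shannon entropy (in bits) of the class-label distribution.
    n = len(labels)
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def meta_features(X, y):
    # Map one dataset (feature matrix X, labels y) to a meta-feature dict.
    return {
        "n_instances": len(X),
        "n_attributes": len(X[0]) if X else 0,
        "class_entropy": class_entropy(y),
    }

# Tiny illustrative dataset: four instances, two attributes, balanced classes.
X = [[1.0, 0.2], [0.5, 0.9], [0.3, 0.4], [0.8, 0.1]]
y = ["a", "a", "b", "b"]
print(meta_features(X, y))
```

In a meta-learning setup, rows like this (one per dataset) become the inputs of a meta-dataset whose targets describe how much HP tuning helped on each dataset.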