Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence 2020
DOI: 10.24963/ijcai.2020/163

Learning Optimal Decision Trees with MaxSAT and its Integration in AdaBoost

Abstract: Recently, several exact methods to compute decision trees have been introduced. On the one hand, these approaches can find optimal trees for various objective functions, including total size, depth, or accuracy on the training set. On the other hand, these methods are not yet widely used in practice, and classic heuristics are often still the methods of choice. In this paper we show how the SAT model proposed by [Narodytska et al. 2018] can be lifted to a MaxSAT approach, making it much m…
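As a rough, non-authoritative illustration of the lifting the abstract describes: in a MaxSAT formulation, the structural clauses of the SAT model (well-formed tree, feature tests, routing of examples) stay hard, while each training example contributes one soft clause asserting that it is classified correctly, so the solver maximizes (weighted) training accuracy instead of demanding a perfectly consistent tree. The sketch below uses PySAT's WCNF/RC2; `tree_clauses` and the literals in `correct` are hypothetical placeholders for the actual encoding, which is not reproduced here.

```python
# A minimal sketch (not the paper's actual encoding) of lifting a SAT model
# for decision-tree learning to MaxSAT with PySAT (pip install python-sat).
from pysat.formula import WCNF
from pysat.examples.rc2 import RC2

def learn_tree_maxsat(tree_clauses, correct, weights=None):
    """tree_clauses: hard clauses encoding a structurally valid tree.
    correct[i]: literal meaning "training example i is classified correctly".
    weights[i]: optional positive integer weight per example (e.g., scaled
    AdaBoost sample weights); defaults to 1, i.e., plain accuracy."""
    wcnf = WCNF()
    for clause in tree_clauses:
        wcnf.append(clause)                # hard: the tree must be well-formed
    for i, lit in enumerate(correct):
        w = 1 if weights is None else weights[i]
        wcnf.append([lit], weight=w)       # soft: prefer classifying example i correctly
    with RC2(wcnf) as solver:
        model = solver.compute()           # None iff the hard part is unsatisfiable
        return model, solver.cost          # cost = total weight of misclassified examples
```

Because the training examples enter the objective only through the soft-clause weights, an AdaBoost integration reduces to re-solving each round with the current sample weights (scaled to integers) attached to the soft clauses.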

Cited by 23 publications (22 citation statements) · References 12 publications
“…Many recent papers directly optimize the performance metric (e.g., accuracy) with soft or hard sparsity constraints on the tree size, where sparsity is measured by the number of leaves in the tree. Three major groups of these techniques are (1) mathematical programming, including mixed integer programming (MIP) [see the works of 28,29,251,301,302,118,6] and SAT solvers [214,130] [see also the review of 47], (2) stochastic search through the space of trees [e.g., 321,114,228], and (3) customized dynamic programming algorithms that incorporate branch-and-bound techniques for reducing the size of the search space [131,185,222,78].…”
Section: Sparse Logical Models: Decision Trees, Decision Lists, Decision Sets (mentioning, confidence: 99%)
“…Then, we compare the prediction performance of the proposed MaxSAT-BDD model with the heuristic methods ODT and OODG (Kohavi and Li 1995). Next, we compare our MaxSAT-BDD model with an exact method for building decision trees using MaxSAT (Hu et al. 2020) in terms of prediction quality, model size, and encoding size. Finally, we propose and evaluate a simple yet efficient, scalable heuristic version of our MaxSAT-BDD model.…”
Section: Results (mentioning, confidence: 99%)
“…Compared to traditional approaches, exact methods offer guarantees of optimality for criteria such as model size and accuracy. In this context, combinatorial optimisation methods, such as Constraint Programming (Bonfietti, Lombardi, and Milano 2015; Verhaeghe et al. 2020), Mixed Integer Programming (Angelino et al. 2018; Verwer and Zhang 2019; Aglin, Nijssen, and Schaus 2020), or Boolean Satisfiability (SAT) (Bessiere, Hebrard, and O'Sullivan 2009; Narodytska et al. 2018; Avellaneda 2020; Hu et al. 2020; Janota and Morgado 2020; Yu et al. 2020) have been successfully used to learn interpretable models. These declarative approaches are particularly interesting since they offer some flexibility to handle additional requirements when learning a model.…”
Section: Introduction (mentioning, confidence: 99%)
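The flexibility noted in the quote above is concrete in a clausal setting: an additional requirement is just more hard clauses on the same formula, with no change to the learning algorithm. Continuing the hypothetical sketch from earlier, where `uses_feature[j][f]` would be the encoding variable "node j tests feature f", forbidding a sensitive feature at every node takes one unit clause per node.

```python
# Hypothetical continuation of the earlier sketch: add a side constraint
# without touching the objective. uses_feature[j][f] is assumed to be the
# variable "node j tests feature f" in the underlying tree encoding.
def forbid_feature(wcnf, uses_feature, f):
    for node_vars in uses_feature:
        wcnf.append([-node_vars[f]])   # hard unit clause: this node must not test f
```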
“…number of nodes, maximum/average depth, etc. [2,68,25,32,1,7,57,24,26,70,67,58,44,30,17,5,73,35,4,78,69,10,34,72,71,36,46,11,45] 2 . Moreover, there has been work on distilling or approximating complex ML models with (soft) decision trees [21,9,8,74,76,56,75].…”
Section: Introduction (mentioning, confidence: 99%)