2020
DOI: 10.1609/aaai.v34i04.5998

Abstract Interpretation of Decision Tree Ensemble Classifiers

Abstract: We study the problem of formally and automatically verifying robustness properties of decision tree ensemble classifiers such as random forests and gradient boosted decision tree models. A recent stream of works showed how abstract interpretation, which is ubiquitously used in static program analysis, can be successfully deployed to formally verify (deep) neural networks. In this work we push forward this line of research by designing a general and principled abstract interpretation-based framework for the for…

Cited by 34 publications (39 citation statements)
References 12 publications (20 reference statements)
“…Other work has moved beyond individual adversarial examples and proposed methods to prove stability and robustness of additive tree ensembles. Ranzato and Zanella [19] propose a method that uses a similar prune and divide-and-conquer approach, but instead of an SMT solver they use an approach specifically tuned for the stability problem. Törnblom and Nadjm-Tehrani [24] use a technique they call equivalence class partitioning that enumerates all possible outputs of the model.…”
Section: Related Work
confidence: 99%
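The equivalence class partitioning idea mentioned in this snippet can be sketched as follows. This is a minimal illustration, not the authors' implementation: the ensemble representation and values are assumptions. The key observation is that an additive tree ensemble emits one leaf value per tree, so its set of possible outputs is finite and can be enumerated exactly.

```python
from itertools import product

# Illustrative sketch of equivalence-class-style output enumeration for an
# additive tree ensemble. Each tree is represented only by the list of
# values its leaves can emit (an assumption made for brevity).
trees = [
    [1.0, -1.0],        # leaf values of tree 1
    [2.0, 0.0, -2.0],   # leaf values of tree 2
]

# Any input selects exactly one leaf per tree, and the ensemble output is
# the sum of the selected leaf values, so the reachable outputs form a
# finite set we can enumerate exhaustively.
possible_outputs = sorted({sum(combo) for combo in product(*trees)})
print(possible_outputs)  # [-3.0, -1.0, 1.0, 3.0]
```

In a full analysis one would also intersect each leaf combination with the input region to discard infeasible paths; this sketch only shows the enumeration step.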
“…Unfortunately, existing work largely suffers from one primary limitation: most approaches focus on solving one very specific type of question. For example, there is a long line of work that attempts to find adversarial examples using the ∞-norm [12,24,19,5]. While some of these systems are capable of dealing with any p-norm, they are evaluated using the ∞-norm.…”
Section: Introduction
confidence: 99%
“…In this work we design and experimentally evaluate an adversarial training algorithm for decision tree ensembles, called MetaSilvae, which is based on a genetic algorithm and aims at maximizing both the accuracy and the robustness of decision trees. MetaSilvae (MS) relies on Silva [44], an open-source method for verifying the robustness of ensembles of decision trees. Silva performs an abstract interpretation-based static analysis [19,46] of a decision tree classifier which abstractly computes the exact set of leaves of a decision tree that are reachable from an adversarial region.…”
confidence: 99%
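The reachable-leaves analysis described in this snippet can be sketched with a simple interval abstraction. This is an illustrative sketch under assumed simplifications (a single tree, box-shaped adversarial regions), not the Silva tool itself: an ℓ∞-ball around an input is a box, and a branch is feasible whenever the box overlaps the corresponding half-space of the split.

```python
from dataclasses import dataclass

@dataclass
class Leaf:
    label: int

@dataclass
class Node:
    feature: int      # index of the feature tested at this split
    threshold: float  # go left if x[feature] <= threshold, else right
    left: object
    right: object

def reachable_leaves(node, lo, hi):
    """Collect every leaf label reachable when feature i ranges over [lo[i], hi[i]]."""
    if isinstance(node, Leaf):
        return {node.label}
    labels = set()
    f, t = node.feature, node.threshold
    if lo[f] <= t:   # some point of the box satisfies x[f] <= t: left feasible
        labels |= reachable_leaves(node.left, lo, hi)
    if hi[f] > t:    # some point of the box satisfies x[f] > t: right feasible
        labels |= reachable_leaves(node.right, lo, hi)
    return labels

# Tiny example tree: x0 <= 0.5 -> class 0; else x1 <= 0.3 -> class 0, else class 1
tree = Node(0, 0.5, Leaf(0), Node(1, 0.3, Leaf(0), Leaf(1)))

# L-infinity ball of radius 0.1 around the point (0.45, 0.5)
x, eps = [0.45, 0.5], 0.1
lo = [v - eps for v in x]
hi = [v + eps for v in x]

print(reachable_leaves(tree, lo, hi))  # {0, 1}: both classes reachable, so not robust
```

Because box splits align with feature axes, this interval analysis is exact for a single tree; verifying an ensemble additionally requires combining the reachable leaves across trees, which is where the pruning and divide-and-conquer strategies discussed above come in.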
“…In particular, we compared our experimental results with the robust gradient boosted decision trees of [2,14]. Abstract interpretation [19,46] techniques have been fruitfully applied for designing precise and scalable robustness verification algorithms and adversarial training techniques for a range of ML models [10,26,36,37,43,44,47,48,49]. In particular, to our knowledge, [36] is the only work using an abstract interpretation technique for adversarial training of ML models, notably deep neural networks.…”
confidence: 99%