2020
DOI: 10.48550/arxiv.2012.07723
Preprint

Evolutionary learning of interpretable decision trees

Leonardo Lucio Custode,
Giovanni Iacca

Abstract: Reinforcement learning techniques have achieved human-level performance in several tasks over the last decade. In recent years, however, the need for interpretability has emerged: we want to be able to understand how a system works and the reasons behind its decisions. Not only do we need interpretability to assess the safety of the produced systems, we also need it to extract knowledge about unknown problems. While some techniques that optimize decision trees for reinforcement learning do exist, they usually employ greedy …
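The abstract contrasts greedy tree induction with evolutionary optimization of decision trees. As a minimal illustrative sketch (not the authors' method), the following evolves a depth-1 decision tree policy by truncation selection and mutation on a toy task; the task, threshold encoding, and hyperparameters are all assumptions for illustration:

```python
import random

# Toy task (assumption, not from the paper): the agent observes a scalar
# state in [0, 1] and should pick action 1 when state > 0.7, else action 0.
# A depth-1 decision tree policy is a tuple (threshold, left_action, right_action).

def fitness(tree, states):
    thr, left, right = tree
    # Reward 1 for each state where the tree picks the correct action.
    return sum(
        (right if s > thr else left) == (1 if s > 0.7 else 0)
        for s in states
    )

def mutate(tree, rng):
    thr, left, right = tree
    choice = rng.random()
    if choice < 0.5:                      # perturb the split threshold
        thr = min(1.0, max(0.0, thr + rng.gauss(0, 0.1)))
    elif choice < 0.75:                   # flip the left leaf action
        left = 1 - left
    else:                                 # flip the right leaf action
        right = 1 - right
    return (thr, left, right)

def evolve(generations=200, pop_size=20, seed=0):
    rng = random.Random(seed)
    states = [rng.random() for _ in range(100)]
    pop = [(rng.random(), rng.randint(0, 1), rng.randint(0, 1))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda t: fitness(t, states), reverse=True)
        elite = pop[: pop_size // 2]      # truncation selection: keep top half
        pop = elite + [mutate(rng.choice(elite), rng) for _ in elite]
    best = max(pop, key=lambda t: fitness(t, states))
    return best, fitness(best, states) / len(states)

if __name__ == "__main__":
    tree, acc = evolve()
    print(f"threshold={tree[0]:.2f} accuracy={acc:.2f}")
```

Unlike a greedy splitter, which fixes each split locally and never revisits it, the evolutionary loop can keep adjusting the threshold and leaf actions jointly, which is the advantage the abstract alludes to.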


Cited by 1 publication (1 citation statement)
References 20 publications (38 reference statements)
“…The estimator of interpretability was finally incorporated into a bi-objective GP, to evaluate the interpretability of evolving models. This estimator has also been used in another recent work [9].…”
Section: Related Work
confidence: 99%