2016
DOI: 10.1016/j.eswa.2016.06.009
What makes classification trees comprehensible?

Cited by 47 publications (21 citation statements)
References 14 publications
“…The study also does not report whether the participants of their study had any domain knowledge relating to the used data, so that it cannot be ruled out that the obtained result was caused by lack of domain knowledge. A similar study was later conducted by Piltaver et al (2016), who found a clear relationship between model complexity and interpretability in decision trees.…”
Section: Conflicting Evidence
confidence: 76%
“…More recently, Fürnkranz et al (2018) performed an experiment with 390 participants to question the idea that the likeliness that a user will accept a logical model such as rule sets as an explanation for a decision is determined by the simplicity of the model. Lage et al (2019) also explore the complexities of rule sets to find features that make them more interpretable, while Piltaver et al (2016) undertake a similar analysis in the case of classification trees. Another important aspect of this empirical line of research is the study of cognitive biases in the understanding of interpretable models.…”
Section: Alternative Paths To Understanding
confidence: 99%
“…Users (i.e. analysts) trust models/results that explanations can be drawn from, sometimes regardless of their predictive performances [6]. Third, lack of interpretability goes against the principle of ease-of-use, an important success factor of any system design.…”
Section: Complexity-Interpretability Trade-off
confidence: 99%