2019
DOI: 10.1007/s10994-019-05856-5

On cognitive preferences and the plausibility of rule-based models

Abstract: It is conventional wisdom in machine learning and data mining that logical models such as rule sets are more interpretable than other models, and that among such rule-based models, simpler models are more interpretable than more complex ones. In this position paper, we question this latter assumption by focusing on one particular aspect of interpretability, namely the plausibility of models. Roughly speaking, we equate the plausibility of a model with the likeliness that a user accepts it as an explanation for…

Cited by 60 publications (35 citation statements)
References 143 publications (158 reference statements)
“…Freitas (2014) examines the pros and cons of decision trees, classification rules, decision tables, nearest neighbors, and Bayesian network classifiers with respect to their interpretability, and discusses how to improve the comprehensibility of classification models in general. More recently, Fürnkranz et al (2018) performed an experiment with 390 participants to question the idea that the likeliness that a user will accept a logical model, such as a rule set, as an explanation for a decision is determined by the simplicity of the model. Lage et al (2019) also explore the complexities of rule sets to find features that make them more interpretable, while Piltaver et al (2016) undertake a similar analysis in the case of classification trees.…”
Section: Alternative Paths to Understanding (mentioning)
confidence: 99%
“…Therefore, the common assumption in the data mining and machine learning literature that users always find simpler models easier to understand and more convincing is not necessarily well-founded. For a similar conclusion that questions the predominant "simplicity bias" when the plausibility of rule-based models is at issue (understood as the likeliness that a user accepts the model as an explanation for a prediction), see also Fürnkranz et al (2019). Second, the results of the Anemia scenario suggest that the perceived plausibility of a list of causal explanations does not depend on their probability given the available evidence as much as on their being supported by the available evidence.…”
Section: Anemia Scenario (mentioning)
confidence: 97%