2018
DOI: 10.48550/arxiv.1804.02969
Preprint

A review of possible effects of cognitive biases on the interpretation of rule-based machine learning models

Tomáš Kliegr, Štěpán Bahník, Johannes Fürnkranz

Abstract: While the interpretability of machine learning models is often equated with their mere syntactic comprehensibility, we think that interpretability goes beyond that, and that human interpretability should also be investigated from the point of view of cognitive science. In particular, the goal of this paper is to discuss to what extent cognitive biases may affect human understanding of interpretable machine learning models, in particular of logical rules discovered from data. Twenty cognitive biases are covered…


Cited by 6 publications (10 citation statements)
References 124 publications
“…We summarized some of these hypotheses, such as the conjunctive fallacy, and started to investigate to what extent these can serve as explanations for human preferences between different learned hypotheses. There are numerous other cognitive effects that can demonstrate how people assess rule plausibility, some of which are briefly listed in Appendix 11 and discussed more extensively in Kliegr et al (2018). Clearly, more work along these lines is needed.…”
Section: Discussion (mentioning)
confidence: 99%
“…However, we neither claim completeness, nor can we provide more than a very short summary of each phenomenon. An extensive treatment of the subject can be found in (Kliegr et al, 2018).…”
Section: Appendix - A Brief Overview Of Relevant Cognitive Heuristics ... (mentioning)
confidence: 99%
“…Another important aspect of this empirical line of research is the study of cognitive biases in the understanding of interpretable models. Kliegr et al (2018) study the possible effects of biases on symbolic machine learning models. As noted in the Introduction, none of these methods is intrinsically interpretable.…”
Section: Alternative Paths To Understanding (mentioning)
confidence: 99%
“…Recently, Fürnkranz et al. (2020) evaluated a selection of cognitive biases in the very specific context of whether minimizing the complexity or length of a rule will also lead to increased interpretability of machine learning models. Kliegr et al. (2018) review twenty different cognitive biases that can distort the interpretation of inductively learned rules in ML models. This work analyses the effect of cognitive biases on human understanding of symbolic ML models and associated debiasing techniques.…”
Section: Related Work (mentioning)
confidence: 99%