2002
DOI: 10.1017/s026988890200019x
A review of explanation methods for Bayesian networks

Abstract: One of the key factors for the acceptance of expert systems in real-world domains is the ability to explain their reasoning (Buchanan & Shortliffe, 1984; Henrion & Druzdzel, 1990). This paper describes the basic properties that characterise explanation methods and reviews the methods developed to date for explanation in Bayesian networks.

Cited by 177 publications (135 citation statements)
References 35 publications
“…An overview of research on explaining Bayesian networks can be found in Lacave and Díez (2002). Typically, the explanation of Bayesian networks attempts to explain why certain modelling choices were made and why the network produces a certain result given these choices.…”
Section: Discussion
“…Many explanation methods for BNs (see e.g. [4,3]) focus on textual or visual systems. Other work on argument extraction includes that of Keppens [2], who focuses on Argument Diagrams.…”
Section: Discussion
“…Tullio et al. reported that, given some basic types of explanations, end users can understand how machine learning systems operate [Tullio et al 2007], with the caveat that overcoming any preliminary faulty assumptions may be problematic. More sophisticated, though computationally expensive, explanation algorithms have been developed for general Bayesian networks [Lacave and Diez 2002]. Finally, Lim et al. investigated the usefulness of the Whyline approach for explaining simple decision trees and found that the approach was viable for explaining this relatively understandable form of machine learning.…”
Section: Communicating With Machine Learning Systems
“…As an example, Bayesian networks provide a sophisticated mechanism for providing detailed explanations of how different pieces of evidence influence the final prediction made by the algorithm [Lacave and Diez 2002], but this is computationally expensive. A current challenge for machine learning is to develop answers to Why questions of statistical machine learning algorithms that can be efficiently computed.…”
Section: Supporting Answers To the Why Questions Beyond Naïve Bayes
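The kind of evidence-influence explanation mentioned in the last statement can be sketched with a toy example. This is not taken from the reviewed paper: it is a minimal illustration, with made-up probabilities and a hypothetical two-node network (Disease → Test), of how a Bayesian network's posterior shifts when evidence is entered, which is the quantity many BN explanation methods report back to the user.

```python
# Minimal sketch (illustrative numbers, not from the review): measuring how a
# single piece of evidence shifts the posterior in a two-node Bayesian network
# Disease -> Test. Explanation methods for BNs typically surface exactly this
# kind of prior-to-posterior shift to justify a prediction.

p_disease = 0.01                            # prior P(Disease = true)
p_pos_given = {True: 0.95, False: 0.05}     # P(Test = + | Disease)

def posterior_disease_given_positive() -> float:
    # Bayes' rule: P(D | +) = P(+ | D) P(D) / P(+)
    joint_true = p_pos_given[True] * p_disease
    joint_false = p_pos_given[False] * (1 - p_disease)
    return joint_true / (joint_true + joint_false)

prior = p_disease
posterior = posterior_disease_given_positive()
# The "impact" of the evidence is summarised as the shift it causes:
print(f"prior P(D)        = {prior:.4f}")
print(f"posterior P(D|+)  = {posterior:.4f}")
print(f"shift due to test = {posterior - prior:+.4f}")
```

In a larger network the same idea applies per evidence variable: enter each finding separately, record how far it moves the posterior of the target, and rank findings by that shift; the computational cost the quoted statements mention comes from repeating exact inference once per finding.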