2019
DOI: 10.1007/978-3-030-34885-4_24

Developing a Catalogue of Explainability Methods to Support Expert and Non-expert Users

Abstract: Organisations face growing legal requirements and ethical responsibilities to ensure that decisions made by their intelligent systems are explainable. However, provisioning of an explanation is often application dependent, causing an extended design phase and delayed deployment. In this paper we present an explainability framework formed of a catalogue of explanation methods, allowing integration into a range of projects within a telecommunications organisation. These methods are split into low-level explanation…

Cited by 8 publications (5 citation statements)
References 11 publications (14 reference statements)
“…However, for now there is no consensus on what it means in practice to provide contextual information for ML explanations. In the literature, the main approach consists in extracting automatically the context from the ML system or the data [2,12,19,23,27,29,35,37]. Indeed, context can first be directly extracted from the ML system, e.g.…”
Section: Contextualizing Explanations
confidence: 99%
“…However, they can be relevant for non-expert users as well. Martin et al [23] show that these users are more likely to ask for concrete examples of output from the training set. Gomez et al [12] propose to contextualize local explanations by adding a visual representation of the dataset.…”
Section: Contextualizing Explanations
confidence: 99%
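The statement above describes example-based contextualization only in general terms. As a rough illustration of the idea of pairing a local prediction with concrete examples drawn from the training set, the following Python sketch uses scikit-learn and a plain nearest-neighbour lookup; the dataset, model, and the contextualise helper are assumptions for illustration and are not taken from the cited works.

# Hedged sketch: example-based context for a local explanation.
# Assumptions (not from the cited papers): scikit-learn, the iris dataset,
# and a k-nearest-neighbour lookup stand in for whatever dataset
# representation the cited approaches actually use.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

def contextualise(instance, k=3):
    """Return the model's prediction plus k concrete training examples
    (and their labels) closest to the instance being explained."""
    nn = NearestNeighbors(n_neighbors=k).fit(X)
    _, idx = nn.kneighbors([instance])
    prediction = model.predict([instance])[0]
    examples = [(X[i].tolist(), int(y[i])) for i in idx[0]]
    return prediction, examples

pred, context = contextualise(X[0])
print("prediction:", pred)
for features, label in context:
    print("similar training example:", features, "label:", label)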
“…However, in an evolving field where the sociotechnical context and expert understanding of the issues continue to change rapidly, the knowledge required to participate will remain a moving target. With similar challenges echoed across the digital economy [102,103], there is clear demand for policy and practitioner development in this area.…”
Section: Participatory Governance
confidence: 99%
“…Over the last few years, explaining opaque machine learning (ML) models has become a topic of increasing attention [2,25,8,23,37,26]. This attention arises from multiple needs of ML users, such as ensuring model trustworthiness, detecting and removing unwanted biases (fairness) and understanding causal relationships [2].…”
Section: Introduction
confidence: 99%