2021 IEEE 29th International Requirements Engineering Conference (RE)
DOI: 10.1109/re51729.2021.00025

Exploring Explainability: A Definition, a Model, and a Knowledge Catalogue

Cited by 59 publications (63 citation statements)
References 58 publications
“…It needs to be clarified which explanations, under which conditions, can justify a stakeholder's belief that the system works properly. With this in mind, we suggest paying particular attention to the context in which an explanation is given, as different stakeholders and situations might require different explanations to make the system trustworthy [5], [6].…”
Section: Future Research Directions (mentioning)
confidence: 99%
“…Many see explainability as a suitable means to foster stakeholder trust [5], [6]: If we better understand how the system produces its outputs and the explanation for a given output fits with our expectations of how a good decision should be made, this explanation presents a reason to trust the system. Thus, at first glance, a requirement for explainability seems to be more suitable than to have a requirement for trust directly.…”
Section: Introduction (mentioning)
confidence: 99%
“…As it is considered to be a central enabler of many crucial desiderata associated with intelligent systems [5]-[7], the importance of explainability for overall system trustworthiness becomes even more apparent. These desiderata can take the form of goals, interests, needs, and demands of the multiple stakeholders involved in the development, deployment, and actual use of intelligent systems [7].…”
Section: Introduction (mentioning)
confidence: 99%
“…Specifically, for the case of (reasonable) trust and trustworthiness, see Kästner et al. (2021). Furthermore, see Chazette et al. (2021) for a general model of the impact of explainability on various social and technical phenomena.…”
(mentioning)
confidence: 99%