2021 IEEE 29th International Requirements Engineering Conference Workshops (REW)
DOI: 10.1109/rew53955.2021.00033

Cases for Explainable Software Systems: Characteristics and Examples

Abstract: Human explanations are often contrastive, meaning that they do not answer the indeterminate "Why?" question, but instead "Why P, rather than Q?". Automatically generating contrastive explanations is challenging because the contrastive event (Q) represents the expectation of a user in contrast to what happened. We present an approach that predicts a potential contrastive event in situations where a user asks for an explanation in the context of rule-based systems. Our approach analyzes a situation that needs to…
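
The approach in the abstract targets rule-based systems, where the expected outcome Q can be tied to a concrete rule that did not fire. As a rough, hypothetical illustration of that idea only (not the authors' implementation), the Python sketch below answers "Why P, rather than Q?" by looking up a rule that would have produced Q and reporting which of its conditions did not hold in the current situation; all names (Rule, explain_contrastive, the smart-home rules) are invented for this example.

# Illustrative sketch (not the paper's implementation): a toy rule-based
# system that contrasts the fact P with a user-expected foil Q by listing
# the unmet conditions of the rule that would have produced Q.

from dataclasses import dataclass
from typing import Callable, Dict, List

Situation = Dict[str, object]

@dataclass
class Rule:
    outcome: str                                       # action the rule triggers
    conditions: Dict[str, Callable[[object], bool]]    # per-attribute checks

    def unmet(self, situation: Situation) -> List[str]:
        """Return the names of conditions that do not hold in this situation."""
        return [name for name, check in self.conditions.items()
                if not check(situation.get(name))]

def explain_contrastive(rules: List[Rule], situation: Situation,
                        fact: str, foil: str) -> str:
    """Explain why `fact` (P) happened rather than the expected `foil` (Q)."""
    foil_rules = [r for r in rules if r.outcome == foil]
    if not foil_rules:
        return f"No rule can ever produce '{foil}'."
    # Pick the foil rule that is closest to firing (fewest unmet conditions).
    closest = min(foil_rules, key=lambda r: len(r.unmet(situation)))
    missing = closest.unmet(situation)
    return (f"'{fact}' occurred instead of '{foil}' because the rule for "
            f"'{foil}' has unmet conditions on: {', '.join(missing)}.")

# Hypothetical smart-home example.
rules = [
    Rule("heating_on",  {"temperature": lambda t: t < 18}),
    Rule("window_open", {"temperature": lambda t: t > 24,
                         "raining":     lambda r: r is False}),
]
situation = {"temperature": 16, "raining": True}
print(explain_contrastive(rules, situation,
                          fact="heating_on", foil="window_open"))

In this toy run, the heating turned on (P) instead of the window opening (Q), and the generated explanation points to the unmet conditions of the window rule (temperature and rain). A real system would additionally have to predict which foil Q the user has in mind, which is the difficult part the abstract refers to.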

Cited by 14 publications (5 citation statements)
References 52 publications (41 reference statements)
“…This theory helps to elucidate the results of studies such as [54,69] belonging to Q1, wherein the effect of explanations on trust is found to be negligible. A plausible assumption is that individuals tend to bypass explanations when the judgment process is straightforward and they perceive the system to work effectively [59]. This rationale finds support in the study conducted by Bansal et al. [4], where participants claimed that they mostly ignored the AI in easily assessable tasks (the sentiment analysis) at up to triple the rate compared to the harder-to-assess tasks in that study (the LSAT question answering).…”
Section: Background and Related Work
Citation type: mentioning (confidence: 76%)
“…The authors proposed a model to analyze the impacts of explainability across different quality dimensions. Sadeghi et al. [16] present a taxonomy of explanation needs that classifies scenarios requiring explanations. The taxonomy can be used to guide requirements elicitation for the explanation capabilities of interactive intelligent systems.…”
Section: Related Work
Citation type: mentioning (confidence: 99%)
“…Constructing taxonomies provides numerous benefits, including supporting the communication of complex concepts, revealing relationships between entities, and uncovering knowledge gaps. In a similar approach for a different domain, Sadeghi et al. [3] developed a taxonomy of reasons for Explanation Needs. They primarily distinguish between four categories of situations requiring explanations: Training, Interaction, Debugging, and Validation, though the authors focused on Interaction.…”
Section: A. Explainability and User Needs in Explanations
Citation type: mentioning (confidence: 99%)
“…There are several approaches to generating explanations for different algorithmic paradigms. However, there has been relatively little focus in the literature on what users actually need explanations for [3]. This lack of knowledge limits our ability to effectively elicit explainability requirements and apply existing explanation generation methods.…”
Section: Introduction
Citation type: mentioning (confidence: 99%)