2021
DOI: 10.2139/ssrn.3877426

How Should Artificial Intelligence Explain Itself? Understanding Preferences for Explanations Generated by XAI Algorithms

Citations: cited by 4 publications (3 citation statements). References: 0 publications.

“…In spite of the above, studies related to XAI have shown that user preferences on interpretability are context- and requirements-dependent, and that users may even prefer more complex explanations (Ramon et al., 2021; Fürnkranz et al., 2020). Other recent examples of user studies are Byrne (2019) and Dodge et al. (2019), with the latter explicitly concluding that there is no one-size-fits-all approach to explaining, but that the usefulness of explanations depends on user profiles and expertise.…”
Section: User Interpretability
confidence: 99%
“…Versatility: NICE has the ability to optimize for multiple counterfactual properties. Research has shown that the preference for counterfactual properties is context-dependent (Ramon et al., 2021; Fürnkranz et al., 2020). An algorithm that can provide multiple counterfactual explanations with different characteristics allows for personalized explanations per user.…”
Section: Introduction: The Need for Explainability
confidence: 99%
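
The versatility point in the statement above can be made concrete with a minimal sketch. The snippet below is a hypothetical illustration, not the NICE API: the property functions, the candidate pool, and the `pick_counterfactual` helper are all assumptions made for this example. It shows how the same pool of candidate counterfactuals yields different explanations depending on which property (sparsity vs. proximity) a given user prefers.

```python
import numpy as np

def sparsity(x, cf):
    # Number of features changed (lower = sparser explanation).
    return int(np.sum(x != cf))

def proximity(x, cf):
    # L1 distance between instance and counterfactual (lower = closer).
    return float(np.sum(np.abs(x - cf)))

def pick_counterfactual(x, candidates, objective):
    # Hypothetical helper: select the candidate counterfactual that
    # best satisfies the user's preferred property.
    score = {"sparsity": sparsity, "proximity": proximity}[objective]
    return min(candidates, key=lambda cf: score(x, cf))

x = np.array([1.0, 0.0, 5.0])            # original instance
candidates = [
    np.array([1.0, 0.0, 2.0]),           # one feature changed, but a large shift
    np.array([0.5, 0.5, 4.5]),           # three features changed, all small shifts
]

print(pick_counterfactual(x, candidates, "sparsity"))   # -> [1. 0. 2.]
print(pick_counterfactual(x, candidates, "proximity"))  # -> [0.5 0.5 4.5]
```

Because the two objectives rank the same candidates differently, a system that exposes the choice of property can return a personalized explanation per user, which is the design rationale the citing authors attribute to NICE.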
“…XAI is mainly used for deep learning models, which are black-box models. XAI is used for many applications such as multimedia computing [9], computer vision [20,29], business intelligence [36] and Twitter analysis [14]. In this research work, we focus on applications based on Twitter analytics.…”
Section: Introduction
confidence: 99%