27th International Conference on Intelligent User Interfaces 2022
DOI: 10.1145/3490099.3511139
Contextualization and Exploration of Local Feature Importance Explanations to Improve Understanding and Satisfaction of Non-Expert Users

Abstract: The increasing usage of complex Machine Learning models for decision-making has raised interest in explainable artificial intelligence (XAI). In this work, we focus on the effects of providing accessible and useful explanations to non-expert users. More specifically, we propose generic XAI design principles for contextualizing and allowing the exploration of explanations based on local feature importance. To evaluate the effectiveness of these principles for improving users' objective understanding and satisfa…
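The abstract refers to "explanations based on local feature importance", i.e. per-feature contribution scores for a single prediction. As a minimal sketch of the idea (not the paper's own method): for a linear model, each feature's contribution to one prediction can be taken as its weight times its value, ranked by absolute impact. All feature names and numbers below are illustrative assumptions.

```python
# Hypothetical weights of a fitted linear model and one input instance.
weights = {"income": 0.8, "debt": -0.5, "age": 0.1}
instance = {"income": 1.2, "debt": 2.0, "age": 0.3}

# Local feature importance for this single prediction:
# contribution of each feature = weight * feature value.
contributions = {f: weights[f] * instance[f] for f in weights}

# Rank by absolute impact, so the most influential features come first,
# as a non-expert-facing explanation typically would present them.
ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
for feature, value in ranked:
    print(f"{feature}: {value:+.2f}")
```

Methods such as LIME or SHAP generalize this idea to non-linear black-box models, which is the setting the cited XAI literature is concerned with.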

Cited by 34 publications (36 citation statements)
References 32 publications
“…Similarly, current research in the field of HCI-based XAI investigates how users perceive user interfaces (UI) and thereby their expectations towards the use of intelligent systems (e.g., Mualla et al 2022;Stumpf et al 2019). This research aims to reveal the influence of HCI in the field of XAI research (e.g., Abdul et al 2018;Bove et al 2022). Lastly, research addresses the impact of interactive UI elements within intelligent systems (e.g., Evans et al 2022;Khanna et al 2022).…”
Section: Related Work
Mentioning confidence: 99%
“…XAI helps users understand the underlying structure of black-box machine learning models and how they produce their outputs; hence, boosting user's confidence in these models and encouraging them to use them. Unfortunately, most XAI that are in use produce explanations in a technical format that is not easily understandable to a non-ML expert (Bove, Aigrain, Lesot, Tijus, & Detyniecki, 2022), which in the case of power generation, most operational staff will be. Research shows that experts in the application domain tend to trust machine learning models when they are provided with human-friendly explanations that will enable them to understand the rationale of ML models (Bove et al, 2022).…”
Section: Problem Statement
Mentioning confidence: 99%
“…Unfortunately, most XAI that are in use produce explanations in a technical format that is not easily understandable to a non-ML expert (Bove, Aigrain, Lesot, Tijus, & Detyniecki, 2022), which in the case of power generation, most operational staff will be. Research shows that experts in the application domain tend to trust machine learning models when they are provided with human-friendly explanations that will enable them to understand the rationale of ML models (Bove et al, 2022). Also, there is a requirement for distinctly different explanations for stakeholders in different application domains (Mohseni, Zarei, & Ragan, 2018).…”
Section: Problem Statement
Mentioning confidence: 99%
“…One strategy for improving user understanding of AI systems is explainable AI (XAI). Machine learning developers have created a large number of explanation techniques for various types of models [4,12,2,45], and the effects of XAI on user understanding has been subject of several user studies in the AI literature [3,47,55,8,11]. However, despite efforts to create benchmarks for objectively evaluating XAI techniques [16,30,56,1], understanding how exactly XAI affects trust and behavior of lay users in human-AI interaction has remained a challenge [44,12,20,21].…”
Section: Related Work
Mentioning confidence: 99%