2023
DOI: 10.3390/electronics12122594
Assessing Perceived Trust and Satisfaction with Multiple Explanation Techniques in XAI-Enhanced Learning Analytics

Abstract: This study aimed to observe the impact of eight explainable AI (XAI) explanation techniques on user trust and satisfaction in the context of XAI-enhanced learning analytics while comparing two groups of STEM college students based on their Bologna study level, using various established feature relevance techniques, certainty, and comparison explanations. Overall, the students reported the highest trust in local feature explanation in the form of a bar graph. Additionally, master’s students presented with globa…

Cited by 2 publications (2 citation statements)
References 36 publications
“…Consequently, the predictions could have harmful (adversarial) consequences for users if they could be applied in real‐life unexplained, without users’ understanding of how and why they are predicted. Research on XAI shows that introducing explanations in AI systems to illustrate their reasoning to end users can improve transparency, interpretability, understanding, satisfaction, and trust (Brdnik et al, 2023).…”
Section: Introduction (mentioning; confidence: 99%)