2022 International Joint Conference on Neural Networks (IJCNN)
DOI: 10.1109/ijcnn55064.2022.9892336
INTERACTION: A Generative XAI Framework for Natural Language Inference Explanations

Abstract: XAI with natural language processing aims to produce human-readable explanations as evidence for AI decision-making, addressing explainability and transparency. However, from an HCI perspective, current approaches focus only on delivering a single explanation, which fails to account for the diversity of human thoughts and experiences in language. This paper addresses this gap by proposing a generative XAI framework, INTERACTION (explaIn aNd predicT thEn queRy with contextuAl CondiTional varIation…
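The abstract's core idea is that a generative model conditioned on the input can emit multiple distinct explanations for the same prediction, rather than a single one. A minimal toy sketch of that sampling behavior (all names and templates here are hypothetical; the random draw merely stands in for the framework's conditional variational latent variable, which the truncated abstract does not fully specify):

```python
# Toy illustration: diverse explanation sampling for an NLI label.
# A latent draw (here, a seeded random choice) yields different
# surface realizations for the same predicted label.
import random

# Hypothetical explanation pools keyed by NLI label.
EXPLANATION_TEMPLATES = {
    "entailment": [
        "The hypothesis restates information given in the premise.",
        "Everything in the hypothesis follows from the premise.",
    ],
    "contradiction": [
        "The hypothesis conflicts with a fact stated in the premise.",
        "The premise rules out what the hypothesis claims.",
    ],
}

def sample_explanations(label: str, n: int, seed: int = 0) -> list[str]:
    """Draw n explanations for a predicted NLI label; varying the
    latent draw produces diverse phrasings of the same rationale."""
    rng = random.Random(seed)
    return [rng.choice(EXPLANATION_TEMPLATES[label]) for _ in range(n)]

print(sample_explanations("entailment", 3))
```

The point of the sketch is only the one-to-many mapping from input to explanations; the actual framework learns this distribution over natural language rather than selecting from fixed templates.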

Cited by 5 publications (2 citation statements)
References 50 publications (55 reference statements)
“…Current explainability methods lack in terms of causality: the presentation of model's relevant modules and input data, which does not necessarily end in user's satisfaction and understanding in the context of a given task (Holzinger et al 2019). With the use of generative AI the new possibilities of generating explanations are presented and they yield promising results that have capacity to make XAI easy to understand by laypersons (Yu et al 2022).…”
Section: XAI Curriculum
confidence: 99%
“…Although the emerging discipline of explainable AI (XAI) has been comprehensively studied for discriminative models [49,64], much less attention has been given to generative models, in general [70]. The few existing works in the literature mostly deal with natural language [85], and software code inference [71], but none consider the problem of image restoration, such as SR.…”
Section: Trustworthiness
confidence: 99%