2023
DOI: 10.1016/j.techfore.2022.122120
Explainable Artificial Intelligence (XAI) from a user perspective: A synthesis of prior literature and problematizing avenues for future research

Cited by 73 publications (50 citation statements)
References 79 publications
“…This often has to do with the interpretability of the input data and how the explanation is conveyed (visuals, text, etc.) (Hudon et al., 2021; Haque et al., 2023).…”
Section: Human Interpretable
confidence: 99%
“…As human-centric tasks often use tabular or time-series data, their explanations are often not concise, actionable, or easily interpretable beyond the scope of a data scientist's knowledge (Karran, Demazure, Hudon, Senecal, & Léger, 2022). Recent research on explanation user design has shown that humans across healthcare, law, finance, education, and e-commerce, among other domains, prefer hybrid text-and-visual explanations (Haque et al., 2023), a format not easily provided by current post-hoc libraries. Lastly, the consistency of the explanations is not intrinsically measured; an explanation generated for the next step in a time series could vary greatly from that of the previous step.…”
Section: Explainers Of Today: State-of-the-art and Limitations
confidence: 99%
“…Conversational AI agents, such as Replika and Alexa, have become context-aware such that they can adapt to and engage in prolonged conversations and understand users' emotions (Chaturvedi et al., 2023; Skjuve et al., 2021). Moreover, the current wave of advancements has shifted from affective computing to generative AI, leading to the emergence of complex agents such as ChatGPT, whose self-supervised learning algorithms can create new content and discuss any topic (Dwivedi et al., 2023; Chaturvedi et al., 2023). Such agents can assume different roles, such as customer service agents, and have already caused digital disruption in the marketplace.…”
Section: Conceptual Background
confidence: 99%
“…Although our data primarily focused on English-speaking countries, existing research on other conversational AI agents, such as Xiaoice in Asian countries, supports our findings. For instance, Chaturvedi et al. (2023) and Shum et al. (2018) discuss how Chinese and Japanese consumers rely on the conversational AI agent Xiaoice for emotional support and to alleviate loneliness, while anecdotal evidence suggests that heavy users of this agent spend more time interacting with it than with fellow humans (Gaubert, 2021). Nonetheless, we recommend that further research examine how consumers from different cultures form relationships with conversational AI agents and what role self-related processes play in such scenarios.…”
Section: Limitations and Future Research
confidence: 99%
“…Despite their transformative potential, the inner mechanics of AI remain obscured. Such systems often resemble a “black box”, presenting limited interpretability regarding their decision-making processes.…”
Section: Introduction
confidence: 99%