2020
DOI: 10.17705/2msqe.00037

Challenges of Explaining the Behavior of Black-Box AI Systems

Cited by 59 publications (43 citation statements)
References 0 publications
“…The services of AI virtual assistants are executed by AI algorithms, but users do not fully trust the information or services these assistants provide because of the inherent black-box problem of AI technology. Existing studies suggest that trust is an essential driver of technology acceptance and can positively influence users' acceptance of new technologies (Kaplan and Haenlein, 2019; van Pinxteren et al., 2019; Asatiani et al., 2020). This paper confirms through empirical research that trust is significantly and positively correlated with AI virtual assistant acceptance, and that its ability to reduce users' negative emotions toward AI virtual assistants plays a key role in improving acceptance.…”
Section: Discussion (supporting)
confidence: 74%
“…Trust is defined as the user's confidence that the AI virtual assistant can reliably deliver a service (Wirtz et al., 2018). The services of AI virtual assistants are based on AI algorithms, but because of the inherent black-box problem of AI technology (Asatiani et al., 2020), users will not fully trust the information or services provided by AI virtual assistants (Kaplan and Haenlein, 2019). Existing research shows that merely meeting users' technical and social needs does not truly increase their loyalty to AI virtual assistants (Hassanein and Head, 2007).…”
Section: Research Framework and Hypothesis Development (mentioning)
confidence: 99%
“…This trade-off raises questions regarding individuals' freedom of choice (Zuboff, 2015, 2019). Second, some advanced algorithms operate as a black box, hiding their inner workings and decision-making processes from human decision-makers (Asatiani et al., 2020, 2021; Faraj et al., 2018). This opacity makes it difficult to establish accountability and to assess the accuracy and robustness of the generated output, and it erodes trust in such technologies (Goldenfein, 2019; de Laat, 2018).…”
Section: Algorithmic Decision-Making in Organisations (mentioning)
confidence: 99%
“…Further, many complex algorithmic models lack transparency, which makes their operating logic hard to understand (Faraj et al., 2018). Such opacity requires organisations to anticipate potential unintended effects and to put various safety measures in place to prevent them (Asatiani et al., 2020, 2021).…”
Section: Introduction (mentioning)
confidence: 99%
“…Another important trade-off arises between accurate explanations and comprehensible explanations: the more accurate the explanation, the less comprehensible it will be for AI-illiterate stakeholders. Even though it may be possible to use xAI frameworks to accurately depict the model output, or how the model produced that output, the description may still be incomprehensible to AI-illiterate stakeholders [24], [51], [52]. While scholars have acknowledged these dilemmas, there is limited understanding of how they are dealt with in practice.…”
Section: Emergence of xAI (mentioning)
confidence: 99%
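
To make the accuracy-versus-comprehensibility trade-off in the excerpt above concrete, here is a minimal Python sketch. It is not taken from the cited paper; the dataset, model, and explanation methods are illustrative assumptions. It contrasts two explanations of the same black-box classifier using only scikit-learn: a faithful but dense feature-attribution view, and a readable but approximate surrogate decision tree.

    # Illustrative sketch (assumed setup, not from the cited paper):
    # two ways of explaining the same black-box model.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.tree import DecisionTreeClassifier, export_text

    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

    # "Accurate" explanation: permutation importance on the full model.
    # Faithful to the black box, but 30 numeric scores are hard for
    # AI-illiterate stakeholders to interpret.
    imp = permutation_importance(black_box, X, y, n_repeats=5, random_state=0)
    for name, score in sorted(zip(X.columns, imp.importances_mean),
                              key=lambda t: -t[1])[:5]:
        print(f"{name}: {score:.3f}")

    # "Comprehensible" explanation: a shallow surrogate tree trained to
    # mimic the black box's predictions. Easy to read as if/then rules,
    # but it only approximates the model's true logic.
    surrogate = DecisionTreeClassifier(max_depth=2, random_state=0)
    surrogate.fit(X, black_box.predict(X))
    print(export_text(surrogate, feature_names=list(X.columns)))

The surrogate tree's if/then rules are accessible to non-experts but only approximate the black box, while the permutation importances are more faithful to the model yet harder for lay stakeholders to read, which is the dilemma the excerpt describes.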