2020 13th International Conference on Human System Interaction (HSI)
DOI: 10.1109/hsi49210.2020.9142668
Build confidence and acceptance of AI-based decision support systems - Explainable and liable AI

Cited by 25 publications (11 citation statements)
References 12 publications
“…This mode of presentation is therefore an explainable system versus the typical black-box decision of traditional AI models, which can improve confidence and thus acceptance in AI. 20 The results of this study did not support that the presentation of AI in its current form changed confidence levels. It might be possible that the experience level of the clinician affects confidence in AI despite an explainable system.…”
Section: Discussion (contrasting)
Confidence: 67%
“…Hence, to achieve better performances from their subjective perspective, participants may attribute less weight in their own responses compared to those given by the machine through a decrease in the confidence associated with their own responses, which may in turn enable them to adapt their post-decisional strategies based on their confidence evaluations. In line with this notion, recent studies showed that machines inspire overconfidence (Booth, 2017; 2020) or mistrust (Nicodeme, 2020; Lee & Rich, 2021; Seth et al, 2020) depending on the situation. Furthermore, it has been shown that prior beliefs about a task could induce under- and overconfidence (Van Marcke et al, 2022), and that confidence plays a role in shaping certain aspects of decision-making behavior such as the confirmation bias (Rollwage et al, 2020).…”
Section: Discussion (mentioning)
Confidence: 91%
“…Before the current heyday of DL, a disinterest towards it existed for several years, mainly due to hardware limitations and lack of funding. However, these techniques were reassessed as soon as more powerful hardware became available, especially Graphics Processing Units (GPUs) [ 146 ]. These devices were initially designed to compute three-dimensional graphics in video games and proved to be good performers of parallel computing; therefore, such systems were promptly used for processing DL algorithms.…”
Section: Artificial Intelligence Applications In Plant Stress Science (mentioning)
Confidence: 99%