2022
DOI: 10.24251/hicss.2022.186

Validation of AI-based Information Systems for Sensitive Use Cases: Using an XAI Approach in Pharmaceutical Engineering

Abstract: Artificial Intelligence (AI) is adopted in many businesses. However, adoption lags behind for use cases with regulatory or compliance requirements, as validation and auditing of AI are still unresolved. AI's opaqueness (i.e., "black box") makes validation challenging for auditors. Explainable AI (XAI) is the proposed technical countermeasure that can support validation and auditing of AI. We developed an XAI-based validation approach for AI in sensitive use cases that facilitates the understanding of the sy…

Cited by 6 publications (3 citation statements)
References 15 publications
“…Researchers and practitioners are interested in XAI business models to help them explore data relationships, improve AI methods, justify AI decisions, and control XAI technologies while simultaneously meeting user needs (Adadi & Berrada, 2018; Meske et al., 2022; Thiebes et al., 2021). In contrast, many other scientists have focused on XAI algorithms and proposed artifacts to increase unbiased AI decision-making (Xie et al., 2022) or to understand the behavior of an AI system (Polzer et al., 2022). To benefit from such solutions, users and interested stakeholders such as managers, data scientists, and AI developers must determine which XAI solution best fits their requirements.…”
Section: Discussion and a Decision Support Framework (mentioning)
confidence: 99%
“…It is possible to associate biological effects with physicochemical effects and to derive accurate and appropriate models according to this relationship. Ultimately, XAI aims to reveal what is done, how it is done, and related information in drug discovery (Holzinger et al., 2022; Polzer et al., 2022). Given the importance of explainability, XAI, a collection of AI methods focused on generating outputs and recommendations that human experts can understand and interpret, is emerging.…”
Section: AI and XAI in Drug Discovery (mentioning)
confidence: 99%
“…Automation bias describes the tendency of people to thoughtlessly accept an automated decision or recommendation. Thus far, automation bias and its negative outcomes have primarily been investigated in aviation contexts (e.g., Mosier and Skitka, 1999; Davis et al., 2020) and medical contexts (e.g., Goddard et al., 2012; Lyell et al., 2018), but have also been found in the military domain and in process control (Bahner et al., 2008; Parasuraman and Manzey, 2010) as well as in quality control (Kloker et al., 2022). However, automation bias can occur in every work field that includes human-system interaction (Goddard et al., 2012).…”
Section: Human Information Processing and Automation Bias (mentioning)
confidence: 99%