Abstract: Experts rely on fraud detection and decision support systems to analyze fraud cases, a growing problem in digital retailing and banking. With the advent of Artificial Intelligence (AI) for decision support, these experts face the black-box problem and lack trust in AI predictions for fraud. This issue has been tackled by employing Explainable AI (XAI) to provide experts with explained AI predictions through various explanation methods. However, fraud detection studies supported by XAI lack a user-centric perspective…
“…The authors evaluate and refine their artifact in three consecutive design cycles. Cirqueira et al. (2021) propose a design framework for an XAI-based decision support system for fraud detection within the financial services industry. They argue that existing XAI-based fraud detection studies neglect a user-centric perspective and, therefore, integrate the concept of user-centricity into their design framework.…”
Section: Explainable Artificial Intelligence In Design Science Research (mentioning)
confidence: 99%
“…The current literature indicates several application domains for XAI methods, with justification towards regulators and other stakeholders as one major application (Adadi and Berrada, 2018). However, existing research limits the scope of justification to the final predictions of an ML model (Chakrobartty and El-Gayar, 2021; Fernandez et al., 2022; Cirqueira et al., 2021; Zhang et al., 2020). This narrow focus leaves all prior stages within the ML pipeline, such as data collection, feature selection, and model training, opaque to regulators and other stakeholders.…”
Section: Contributions To Theory (mentioning)
confidence: 99%
“…Current research on explainable artificial intelligence (XAI) focuses on the interpretability of opaque ML models (Arrieta et al., 2020). The primary goal is to explain the logic behind a model's predictions, which are otherwise incomprehensible to human users (see, e.g., Chakrobartty and El-Gayar, 2021; Fernandez et al., 2022; Cirqueira et al., 2021; Zhang et al., 2020). These XAI techniques mostly address predictions of already trained models; literature that addresses the explainability and justifiability of ML pre-processing, let alone feature selection, is scarce (Marcilio and Eler, 2020).…”
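To make concrete what the quoted passage means by XAI techniques that "address predictions of already trained models", here is a minimal, hypothetical sketch, not taken from the cited works, that uses the SHAP library and scikit-learn on synthetic data to attribute a single trained-model prediction to its input features:

```python
# A minimal sketch of post-hoc explanation for an already trained model,
# in the spirit of the quoted passage. The SHAP library, the synthetic
# data, and the random-forest model are illustrative assumptions, not
# the cited works' methods.
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic data standing in for, e.g., tabular fraud records.
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Explain one individual prediction after training: SHAP attributes the
# model output for this instance to each of its input features.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])
print(shap_values)  # per-feature contributions for the single instance
```

Note that such attributions explain only the trained model's output; nothing in this sketch illuminates how the eight features were collected or selected, which is precisely the gap the quoted passage points out.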
Nowadays, artificial intelligence (AI) systems make predictions in numerous high-stakes domains, including credit-risk assessment and medical diagnostics. Consequently, AI systems increasingly affect humans, yet many state-of-the-art systems lack transparency and thus deny the individual's "right to explanation". As a remedy, researchers and practitioners have developed explainable AI, which provides reasoning on how AI systems infer individual predictions. However, with recent legal initiatives demanding comprehensive explainability throughout the (development of an) AI system, we argue that the pre-processing stage has been unjustifiably neglected and should receive greater attention in current efforts to establish explainability. In this paper, we focus on introducing explainability to an integral part of the pre-processing stage: feature selection. Specifically, we build upon design science research to develop a design framework for explainable feature selection. We instantiate the design framework in a running software artifact and evaluate it in two focus group sessions. Our artifact helps organizations persuasively justify feature selection to stakeholders and thus comply with upcoming AI legislation. We further provide researchers and practitioners with a design framework consisting of meta-requirements and design principles for explainable feature selection.
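As a concrete illustration of the abstract's core idea, the sketch below shows one plausible way to make feature selection explainable: score features with mutual information and emit a human-readable justification for each keep/drop decision. The scoring criterion, the scikit-learn utilities, and the report format are assumptions for illustration, not the authors' actual artifact:

```python
# A minimal sketch of explainable feature selection, assuming mutual
# information as the scoring criterion and a plain-text justification
# report; this is an illustration, not the paper's artifact.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif

X, y = make_classification(n_samples=500, n_features=6, n_informative=3,
                           random_state=0)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]

# Keep the three features with the highest mutual information with the label.
selector = SelectKBest(score_func=mutual_info_classif, k=3).fit(X, y)

# Justification report: state, per feature, why it was kept or dropped.
for name, score, kept in zip(feature_names, selector.scores_,
                             selector.get_support()):
    verdict = "kept" if kept else "dropped"
    print(f"{name}: mutual information = {score:.3f} -> {verdict}")
```

The design choice here is that every keep/drop decision is tied to a quantitative criterion that can be shown to a regulator, rather than remaining an undocumented judgment call.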
“…Apart from IS-related contributions such as Förster et al. (2020), who provide a design process for user-centric XAI systems, and Herm, Wanner, et al. (2022b), who introduce a taxonomy to assist user-centered XAI research, we were only able to identify a handful of DSR-based contributions that focus on user-based studies for EIS (Bunde 2021; Cirqueira et al. 2021; Landwehr et al. 2022; Schemmer et al. 2022). Bunde (2021) provides design principles for explainable DSS limited to detecting hate speech.…”
Section: Related Work (mentioning)
confidence: 99%
“…Landwehr et al. (2022) derive design knowledge for image-based DSS. Further, Cirqueira et al. (2021) state design principles for XAI-based systems in fraud detection, and Schemmer et al. (2022) propose design principles for an XAI-based DSS for real estate appraisals.…”
Due to computational advances in the past decades, so-called intelligent systems can learn from increasingly complex data, analyze situations, and support users in their decision-making. However, in practice, the complexity of these intelligent systems leaves users hardly able to comprehend the inherent decision logic of the underlying machine learning model. As a result, the adoption of this technology, especially in high-stakes scenarios, is hampered. In this context, explainable artificial intelligence offers numerous starting points for making the inherent logic explainable to people. While research manifests the necessity of incorporating explainable artificial intelligence into intelligent systems, there is still a lack of knowledge about how to socio-technically design these systems to address acceptance barriers among different user groups. In response, we have derived and evaluated a nascent design theory for explainable intelligent systems based on a structured literature review, two qualitative expert studies, a real-world use case application, and quantitative research. Our design theory includes design requirements, design principles, and design features covering the topics of global explainability, local explainability, personalized interface design, as well as psychological/emotional factors.
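For readers unfamiliar with the global/local distinction named in this design theory, the following minimal sketch, an assumption-laden illustration rather than the paper's artifact, contrasts the two using an inherently interpretable logistic regression: the coefficients give a global explanation valid for all predictions, while the per-instance products of coefficients and feature values give a local explanation of a single prediction:

```python
# A minimal sketch (illustration only, not the paper's artifact)
# contrasting global and local explainability with an inherently
# interpretable logistic regression.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=400, n_features=4, random_state=0)
model = LogisticRegression().fit(X, y)

# Global explanation: one weight per feature, valid across all predictions.
print("global coefficients:", model.coef_[0])

# Local explanation: how one instance's feature values contribute to its
# decision score (coef_i * x_i, summed with the intercept).
instance = X[0]
contributions = model.coef_[0] * instance
print("local contributions:", contributions)
print("decision score:", contributions.sum() + model.intercept_[0])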