2023
DOI: 10.3390/make5010006

XAIR: A Systematic Metareview of Explainable AI (XAI) Aligned to the Software Development Process

Abstract: Currently, explainability represents a major barrier that Artificial Intelligence (AI) is facing in regard to its practical implementation in various application domains. To combat the lack of understanding of AI-based systems, Explainable AI (XAI) aims to make black-box AI models more transparent and comprehensible for humans. Fortunately, plenty of XAI methods have been introduced to tackle the explainability problem from different perspectives. However, due to the vast search space, it is challenging for ML…

Cited by 37 publications (28 citation statements) | References 183 publications
“…Precision medicine, a common application of AI in digital health, involves tailoring healthcare interventions to subgroups of patients by using prediction models trained on patient characteristics and contextual factors [ 29 , 30 , 31 ]. However, the reliance on AI in healthcare raises issues regarding transparency and accountability with black-box AI systems whose decision-making processes are opaque [ 32 , 33 ]. Explainable artificial intelligence emerges as a solution to enhance transparency, ensuring that AI-driven decisions are comprehensible to healthcare providers and patients alike [ 29 , 34 ].…”
Section: Results
mentioning
confidence: 99%
“…This study aims to conduct a comprehensive assessment of the strengths and weaknesses of this explainer. Similarly, a metareview conducted by Clement et al [36] demonstrates a high correlation between the evaluation method and the complexity of the development process. They primarily considered computational evaluation, calculating the time, resources, and expense of producing explanations without human intervention.…”
Section: Related Work
mentioning
confidence: 91%
“…To clarify, it evaluates the consistency of explanations under minor changes to the input. Robust explanations are desired because the user expects the model to behave consistently, generating similar explanations for similar instances [3,6,36,61].…”
mentioning
confidence: 99%
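A minimal sketch of the robustness idea described in the excerpt above: perturb an input slightly and check that the explanation stays similar. The linear model, the occlusion-style explainer, and the cosine-similarity score below are illustrative assumptions, not the evaluation protocol of the cited works.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-ins: a linear "black box" and occlusion attributions
# (the score drop when a feature is zeroed out) play the role of the model
# and explainer whose robustness is being checked.
w = rng.normal(size=8)

def model(x):
    return float(x @ w)

def explain(x):
    base = model(x)
    return np.array([base - model(np.where(np.arange(x.size) == i, 0.0, x))
                     for i in range(x.size)])

def explanation_robustness(x, n_perturb=50, eps=0.01):
    """Mean cosine similarity between the explanation of x and the
    explanations of slightly perturbed copies of x (1.0 = perfectly stable)."""
    e_x = explain(x)
    sims = []
    for _ in range(n_perturb):
        e_p = explain(x + rng.normal(scale=eps, size=x.shape))
        sims.append(e_x @ e_p / (np.linalg.norm(e_x) * np.linalg.norm(e_p) + 1e-12))
    return float(np.mean(sims))

x = rng.normal(size=8)
print(f"explanation robustness: {explanation_robustness(x):.3f}")
```

For a linear model the occlusion attributions barely move under small perturbations, so the score stays near 1; an explainer that flips its attributions under tiny input noise would score much lower, signalling the kind of instability the quoted passage warns against.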
“…For developers, XAI techniques can aid in system debugging and improvement by revealing insights into decision-making processes and identifying areas for enhancement [43,44]. Furthermore, Clement et al [8] present a comprehensive survey that positions various XAI methods with respect to software development principles. Researchers interested in applying XAI techniques to these application domains are encouraged to refer to these surveys [6,7,39,41,45], which provide detailed reviews of methods tailored to specific applications.…”
Section: A Brief Overview of the Previous Attempts in Explainable AI
mentioning
confidence: 99%
“…This survey paper focuses on organizing XAI approaches that explain the working of Convolutional Neural Networks (CNNs), which are state-of-the-art models for image classification. While various surveys exist in the literature [6][7][8][9][10] with different aims and scopes, our paper aims to provide a multi-view taxonomy of XAI approaches by carefully analyzing the existing literature. The taxonomy considers both the incorporation of explainability during the training phase (antehoc) and the approximation of the black box's working mechanism without disturbing the deployed model (posthoc).
Section: Introduction
mentioning
confidence: 99%
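To make the antehoc/posthoc distinction in that excerpt concrete, the following sketch computes a plain gradient saliency map for an already-trained CNN, i.e. a posthoc explanation that leaves the deployed model untouched. The tiny torch model here is an assumption for illustration only; it is not an architecture or method from the surveyed papers.

```python
import torch
import torch.nn as nn

# Illustrative stand-in for a deployed image classifier (treated as a black box).
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10),
)
model.eval()

def gradient_saliency(model, image, target_class):
    """Posthoc saliency: |d class-score / d pixel|, obtained by backpropagating
    through the frozen model without modifying or retraining it."""
    image = image.clone().requires_grad_(True)
    score = model(image.unsqueeze(0))[0, target_class]
    score.backward()
    return image.grad.abs().max(dim=0).values  # (H, W) heat map, max over channels

img = torch.rand(3, 32, 32)                 # dummy RGB input
heatmap = gradient_saliency(model, img, target_class=3)
print(heatmap.shape)                        # torch.Size([32, 32])
```

An antehoc approach, by contrast, would build interpretability into training itself (for example prototype-based or attention-constrained architectures) rather than probing a finished model in this way.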