2024
DOI: 10.1016/j.inffus.2024.102301
Explainable Artificial Intelligence (XAI) 2.0: A manifesto of open challenges and interdisciplinary research directions

Luca Longo,
Mario Brcic,
Federico Cabitza
et al.
Cited by 47 publications (16 citation statements)
References 137 publications
“…Machine and deep learning-based applications have been widely adopted for supervised AD detection with EEG data analysis [18, 23-25]. For example, Convolutional Neural Networks (CNNs) have been trained on functional brain connectivity features to automatically detect AD and other neurological disorders [26].…”
Section: Related Work
confidence: 99%
“…In yet another recent survey on XAI methods, including LIME and SHAP, for use in the detection of Alzheimer's disease [20], the authors also highlighted the limitations and open challenges of XAI methods. A recently published work [21] discusses a number of open issues with XAI under nine categories, providing research directions.…”
Section: Introduction
confidence: 99%
“…Rudin [18] pointed out the limitations of some approaches to explainable machine learning, suggesting that interpretable models, rather than black-box models, should be used for making high-stakes decisions. Recently, XAI has entered a new phase with the provisional agreement on the AI Act, which is aimed at explaining AI [19]. This is important because black-box machine learning applications remain challenging in several domains, such as health care and finance.…”
Section: Introduction
confidence: 99%