2022
DOI: 10.3389/fnins.2022.906290
Explainable AI: A review of applications to neuroimaging data

Abstract: Deep neural networks (DNNs) have transformed the field of computer vision and currently constitute some of the best models for representations learned via hierarchical processing in the human brain. In medical imaging, these models have shown human-level and even superior performance in the early diagnosis of a wide range of diseases. However, the goal is often not only to accurately predict group membership or diagnose but also to provide explanations that support the model decision in a context that a human ca…

Cited by 21 publications (12 citation statements)
References 151 publications
“…Last, the use of AI for imaging and analysis in gliomas is an extensive topic outside the scope of this guide and has been discussed in dedicated publications [209]. Our guide therefore only provides a snapshot of the current state-of-the-art of a continuously and rapidly evolving field.…”
Section: Discussion
Confidence: 99%
“…Numerous research efforts have delved into the application of XAI within medical imaging contexts, encompassing areas such as chest X-rays [20], CT scans [21], and MRI scans [22]. These studies have employed a range of XAI techniques, including but not limited to saliency maps, attribution maps, and decision trees.…”
Section: Related Work
Confidence: 99%
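The saliency maps mentioned above attribute a model's output to individual input features, typically via the gradient of the output with respect to the input. A minimal numpy-only sketch, using a finite-difference gradient and a hypothetical toy linear model standing in for a trained DNN (the model, weights, and function names here are illustrative, not from the cited studies):

```python
import numpy as np

def saliency_map(model, x, eps=1e-4):
    """Finite-difference saliency: |d model(x) / d x_i| for each input feature.
    Stand-in for gradient-based saliency on a differentiable classifier."""
    base = model(x)
    sal = np.zeros_like(x)
    for i in range(x.size):
        xp = x.copy()
        xp.flat[i] += eps          # perturb one feature at a time
        sal.flat[i] = abs((model(xp) - base) / eps)
    return sal

# Toy "model": a linear scorer; its saliency is exactly |w|.
w = np.array([0.5, -2.0, 0.0, 1.0])
model = lambda x: float(w @ x)

x = np.ones(4)
print(saliency_map(model, x))      # ≈ |w| = [0.5, 2.0, 0.0, 1.0]
```

For a real DNN one would use autodiff (e.g., backpropagation to the input) rather than finite differences, but the attribution being computed is the same quantity.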
“…Stability examines how slight perturbations in the input affect the explanation provided by XAI techniques [107]. For example, Douglas and Farahani examined the stability of XAI performance for neuroimaging [108].…”
Section: Stability
Confidence: 99%
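The stability notion quoted above can be made concrete by perturbing an input with small noise and measuring how similar the resulting explanations stay. A minimal sketch using mean cosine similarity as the stability score; the metric, noise model, and toy gradient explanation are illustrative assumptions, not the specific measure used in the cited studies:

```python
import numpy as np

def explanation_stability(explain, x, n_trials=100, noise=0.01, seed=0):
    """Mean cosine similarity between the explanation of x and explanations
    of slightly perturbed copies of x. Values near 1.0 indicate that the
    XAI method is stable under small input perturbations."""
    rng = np.random.default_rng(seed)
    e0 = explain(x)
    sims = []
    for _ in range(n_trials):
        xp = x + rng.normal(0.0, noise, size=x.shape)
        e = explain(xp)
        sims.append(e @ e0 / (np.linalg.norm(e) * np.linalg.norm(e0) + 1e-12))
    return float(np.mean(sims))

# Toy explanation: the input gradient of f(x) = sum(w * x**2), i.e. 2*w*x.
w = np.array([0.5, -2.0, 1.0])
explain = lambda x: 2.0 * w * x

x = np.array([1.0, 2.0, 3.0])
print(explanation_stability(explain, x))   # close to 1.0 for small noise
```

A fragile explainer (one whose attributions flip under imperceptible perturbations) would score noticeably below 1.0 under the same procedure.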