2022 · DOI: 10.1016/j.media.2022.102470
Explainable artificial intelligence (XAI) in deep learning-based medical image analysis

Cited by 454 publications (220 citation statements)
References 150 publications
“…Understanding how an AI-based CAD scheme or prediction model can make reliable predictions is non-trivial to most individuals, because it is very difficult to explain the clinical or physical meanings of the features automatically extracted by a CNN-based deep transfer learning model. Thus, developing explainable AI models in medical image analysis has emerged as a hot research topic ( 150 ). Among these efforts, visualization tools with interactive capabilities have been developed that aim to show the user which regions in an image or image patterns (i.e., “heat maps”) contribute the most to the decision made by AI models ( 151 , 152 ).…”
Section: Discussion – Outlook and Challenges
Mentioning confidence: 99%
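The citation statement above describes heat-map tools that highlight which image regions drive a model's decision. One simple, model-agnostic way to produce such a map is occlusion sensitivity: mask each region in turn and measure how much the model's score drops. The sketch below is a minimal illustration of that idea; the `occlusion_heatmap` helper and the toy scoring function are hypothetical names, not code from the cited works.

```python
import numpy as np

def occlusion_heatmap(image, score_fn, patch=4):
    """Occlusion sensitivity: slide a patch over the image, replace it
    with the image mean, and record how much the model score drops.
    Larger drops mean the region mattered more to the decision."""
    h, w = image.shape
    base = score_fn(image)
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = image.mean()
            heat[i // patch, j // patch] = base - score_fn(occluded)
    return heat

# Toy "model": scores an image by the total intensity of its
# top-left quadrant, so only that quadrant should light up.
def toy_score(img):
    return float(img[:8, :8].sum())

img = np.zeros((16, 16))
img[:8, :8] = 1.0  # the signal lives in the top-left quadrant
heat = occlusion_heatmap(img, toy_score, patch=4)
# Cells covering the top-left quadrant show a positive score drop;
# all other cells are unaffected by occlusion.
```

Real XAI toolkits apply the same principle to trained CNNs (or use gradient-based variants such as class activation maps) and overlay the resulting heat map on the medical image for the clinician.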
“…This information can help technical industrial teams understand potential risk factors for OHPPs, identify warning signs of the early stages of musculoskeletal disorders, and learn how to cope with work-related absenteeism [ 57 ]. In future work, we will try to apply other XAI techniques [ 58 , 59 , 60 ] to our model for further improvement. Different XAI techniques will help make our model more explainable to others.…”
Section: Discussion
Mentioning confidence: 99%
“…This is one of the reasons why it is essential to develop proper mechanisms to explain the decisions of AI models. Recently, various works have been published, such as [33][34][35][36], in which the proposed frameworks provide explanations across a variety of target applications.…”
Section: Explainable AI Framework
Mentioning confidence: 99%