2020
DOI: 10.3389/frai.2020.507973
The Next Generation of Medical Decision Support: A Roadmap Toward Transparent Expert Companions

Abstract: Increasing quality and performance of artificial intelligence (AI) in general and machine learning (ML) in particular is followed by a wider use of these approaches in everyday life. As part of this development, ML classifiers have also gained more importance for diagnosing diseases within biomedical engineering and medical sciences. However, many of those ubiquitous high-performing ML algorithms reveal a black-box-nature, leading to opaque and incomprehensible systems that complicate human interpretations of …

Cited by 50 publications (43 citation statements)
References 49 publications
“…Some participants in the study said, “In some cases, I felt that the list by AI was wrong; this was from my intuition, though.” However, since as many as 56% of physician diagnoses were identical to the AI diagnoses in the AI-incorrect cases, the physicians’ discrimination skill was far from ideal. Indeed, some participants noted, “Some of my diagnoses might have been affected by the AI list when I could not judge whether AI was correct or not.” To enhance the physicians’ skill in discriminating whether the AI is correct or not, one possible solution may be to visualize the process and evidence in the generation of AI differential diagnoses (opening the black box of AI), which is called explainable AI [24, 25]. Explainable AI refers to, in a nutshell, AI in which the output by the AI can be logically understood and explained by humans.…”
Section: Discussion (mentioning)
confidence: 99%
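The “opening the black box” idea above — surfacing the evidence behind a classifier’s output — can be sketched with a generic, model-agnostic attribution method. The dataset, model, and permutation-importance technique below are illustrative assumptions for the sketch, not the approach used in the paper or the cited study:

```python
# Minimal sketch: ranking the input features that drive a classifier's
# predictions, via permutation feature importance (scikit-learn).
# The breast-cancer dataset and random forest are illustrative stand-ins.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops:
# a large drop means the model leans heavily on that feature.
result = permutation_importance(clf, X_test, y_test,
                                n_repeats=10, random_state=0)

ranked = sorted(zip(data.feature_names, result.importances_mean),
                key=lambda t: -t[1])
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```

A ranking like this is only a first step toward the kind of explanation the quote calls for; richer approaches would also expose the reasoning path behind each individual differential diagnosis rather than global feature relevance.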
“…Since our approach is interpretable, it could help users in the future to uncover causalities between data and a system's prediction. This is especially important in decision-critical areas, such as medicine [14, 4, 31].…”
Section: Discussion (mentioning)
confidence: 99%
“…Bruckert et al [212] argued that semantic and contextual information must be taken into account while generating explanations. They further argued that human-interpretable explanations must shed light on logical as well as causal correlations.…”
Section: B. Better User Interface/Experience (mentioning)
confidence: 99%