2023
DOI: 10.1101/2023.05.12.23289878
Preprint

Dissection of medical AI reasoning processes via physician and generative-AI collaboration

Abstract: Despite the proliferation and clinical deployment of artificial intelligence (AI)-based medical software devices, most remain black boxes that are uninterpretable to key stakeholders including patients, physicians, and even the developers of the devices. Here, we present a general model auditing framework that combines insights from medical experts with a highly expressive form of explainable AI that leverages generative models, to understand the reasoning processes of AI devices. We then apply this framework …

Cited by 7 publications (3 citation statements)
References 48 publications

“…This generalist approach, while versatile, can lead to inaccuracies and hallucinations, especially in highly specialized fields like medical imaging [22]. Despite its potential, the accuracy and reliability of ChatGPT responses should be carefully assessed, and its limitations in understanding medical terminology and context should be addressed [23]. In contrast, KARA-CXR, designed explicitly for medical image analysis, benefits from a more focused training regime, enabling it to discern nuanced details in medical images more effectively and reducing the likelihood of generating erroneous interpretations.…”
Section: Discussion (mentioning)
confidence: 99%
“…What can be inferred from this representative conversation? Firstly, the downsides: the chatbot's 'concerns' are actually addressed in the report 4,5 (which was included as part of the prompt to the chatbot) and earlier in the dialogue. However, a future multimodal system that can interpret scientific schematics, imagery, graphs and data may be less likely to confidently make erroneous assertions.…”
Section: Editorial (mentioning)
confidence: 99%
“…While these explainability methods can provide information about the spatial location (the “where”) of important features, they do not typically explain which higher-level features of the pixels in the highlighted region are predictive, such as texture, shape, or size (the “what”), limiting their utility in explaining possible underlying mechanisms. Recently, a new line of research 25,26,27,28,29 showed how generative models can be used to transform images of one class into another, i.e., creating counterfactual images for these classes. While these methods show how an image changes when the class is changed, they cannot disentangle individual fine-grained attributes.…”
Section: Introduction (mentioning)
confidence: 99%
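
As a rough illustration of the counterfactual idea mentioned in the excerpt above (not the method of the cited works or of the preprint itself), here is a minimal sketch of counterfactual image generation via latent-space optimization; `encoder`, `decoder`, and `classifier` are hypothetical pretrained models assumed for the example.

```python
# Minimal conceptual sketch: optimize a latent code so the decoded image is
# assigned a target class while staying close to the original latent code.
# This is an illustrative assumption, NOT the cited papers' method.
import torch
import torch.nn.functional as F

def generate_counterfactual(image, encoder, decoder, classifier,
                            target_class, steps=100, lr=0.05, reg=0.1):
    """Return a counterfactual image whose predicted class is `target_class`."""
    z0 = encoder(image).detach()          # latent code of the original image
    z = z0.clone().requires_grad_(True)   # latent variable we optimize
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        counterfactual = decoder(z)
        logits = classifier(counterfactual)
        target = torch.tensor([target_class], device=logits.device)
        # Classification loss pushes the decoded image toward the target class;
        # the L2 term keeps the edit minimal relative to the original latent.
        loss = F.cross_entropy(logits, target) + reg * (z - z0).pow(2).mean()
        loss.backward()
        opt.step()
    return decoder(z).detach()
```

Comparing the returned counterfactual with the original image shows what changed when the class flipped, which is the "what" that saliency-style "where" maps leave unexplained; as the excerpt notes, such global image-to-image translation still does not by itself disentangle individual fine-grained attributes.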