2023
DOI: 10.1101/2023.06.07.23291119
Preprint

Fostering transparent medical image AI via an image-text foundation model grounded in medical literature

Abstract: Building trustworthy and transparent image-based medical AI systems requires the ability to interrogate data and models at all stages of the development pipeline: from training models to post-deployment monitoring. Ideally, the data and associated AI systems could be described using terms already familiar to physicians, but this requires medical datasets densely annotated with semantically meaningful concepts. Here, we present a foundation model approach, named MONET (Medical cONcept rETriever), which learns h…
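For readers interested in how a concept-retrieval foundation model of this kind is typically queried, the sketch below shows generic CLIP-style zero-shot concept scoring with Hugging Face Transformers. It illustrates the general image-text mechanism only, under the assumption of a CLIP-like architecture: the checkpoint, concept list, prompts, and image path are placeholders, not MONET's released weights, vocabulary, or API.

# A minimal sketch of CLIP-style zero-shot concept annotation, the general
# image-text mechanism a concept retriever like MONET builds on. Checkpoint,
# concepts, and image path are illustrative placeholders, not MONET's assets.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Hypothetical dermatology concepts phrased as text prompts.
concepts = ["erythema", "ulceration", "hyperpigmentation"]
prompts = [f"a clinical photo of skin showing {c}" for c in concepts]

image = Image.open("lesion_example.jpg")  # placeholder image path
inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)

# Image-text similarity scores act as soft, human-readable concept annotations.
scores = outputs.logits_per_image.softmax(dim=-1).squeeze(0)
for concept, score in zip(concepts, scores.tolist()):
    print(f"{concept}: {score:.3f}")

In the setting the abstract describes, per-concept scores of this kind are what allow datasets and model behavior to be characterized in terms already familiar to physicians.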

Cited by 2 publications (1 citation statement). References 46 publications.
“…Further, research by Moor et al and Tu et al shows the accuracy of VLMs in medical visual question-answering tasks (5, 24). In a related vein, Kim et al’s study on FMs underscores the capacity of this new class of models to generate accurate skin image annotations (95).…”
Section: Future Directions and Opportunities
confidence: 99%