The 22nd Workshop on Biomedical Natural Language Processing and BioNLP Shared Tasks 2023
DOI: 10.18653/v1/2023.bionlp-1.42

RadAdapt: Radiology Report Summarization via Lightweight Domain Adaptation of Large Language Models

Cited by 12 publications (7 citation statements) · References: 0 publications
“…Crucially, machine-generated summaries must be non-inferior to those of seasoned clinicians, especially when used to support sensitive clinical decision-making. Recent work in clinical natural language processing (NLP) has demonstrated potential on medical text [66, 75], adapting to the medical domain by either training a new model [59, 70], fine-tuning an existing model [67, 72], or supplying task-specific examples in the model prompt [46, 72]. However, adapting LLMs to summarize a diverse set of clinical tasks has not been thoroughly explored, nor has non-inferiority to humans been achieved.…”
Section: Methods
Confidence: 99%
“…ICL is a lightweight adaptation method that requires no altering of model weights; instead, one includes a handful of in-context examples directly within the model prompt [39]. This simple approach provides the model with context, enhancing LLM performance for a particular task or domain [46, 72]. We implemented this by choosing, for each sample in our test set, the m nearest-neighbor training samples in the embedding space of the PubMedBERT model [16].…”
Section: Adaptation Methods
Confidence: 99%
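
The nearest-neighbor example selection described in the statement above can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: the checkpoint name (microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract), mean pooling over the final hidden states, cosine similarity as the distance metric, and the helper names embed and select_in_context_examples are all assumptions, since the passage specifies only "the m nearest-neighbor training samples in the embedding space of the PubMedBERT model".

```python
# Minimal sketch of m-nearest-neighbor in-context example selection.
# Assumptions (not stated in the quoted passage): this PubMedBERT
# checkpoint, mean pooling, and cosine similarity.
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

MODEL_NAME = "microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract"  # assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
encoder = AutoModel.from_pretrained(MODEL_NAME).eval()

def embed(texts):
    """Mean-pool the final hidden states into one vector per text."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**batch).last_hidden_state          # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1).float()     # (B, T, 1)
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)      # (B, H)

def select_in_context_examples(test_text, train_texts, m=4):
    """Return the m training samples nearest to test_text in embedding space."""
    query = F.normalize(embed([test_text]), dim=-1)          # (1, H)
    corpus = F.normalize(embed(train_texts), dim=-1)         # (N, H)
    sims = (corpus @ query.T).squeeze(-1)                    # cosine similarity per sample
    top = sims.topk(min(m, len(train_texts))).indices.tolist()
    return [train_texts[i] for i in top]
```

In practice the corpus embeddings would be computed once and cached rather than re-embedded per test sample; the selected examples would then typically be prepended to the model prompt as input/summary pairs ahead of the test sample.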
“…Large language models (LLMs) have the potential to revolutionize our medical system [54]. They can already streamline report generation and summarization [61, 59, 8, 37], answer biomedical questions with [60, 5, 59] and without [50, 49, 38] images, and could soon effectively interpret multimodal data for precision medicine in the clinic [6]. Importantly, as humans primarily interact with the world through language, LLMs are poised to be the point of access to the multimodal medical AI solutions of the future [36].…”
Section: Main
Confidence: 99%