2023
DOI: 10.1001/jamanetworkopen.2022.55113
Comparison of Chest Radiograph Captions Based on Natural Language Processing vs Completed by Radiologists

Abstract: Importance: Artificial intelligence (AI) can interpret abnormal signs in chest radiography (CXR) and generate captions, but a prospective study is needed to examine its practical value. Objective: To prospectively compare natural language processing (NLP)-generated CXR captions and the diagnostic findings of radiologists. Design, Setting, and Participants: A multicenter diagnostic study was conducted. The training data set included CXR images and reports retrospectively collected from February 1, 2014, to February 28, …



Cited by 9 publications (5 citation statements) | References 34 publications
“…GenAI aids in diagnostic accuracy, although its focus on higher value creation in health care is limited. The articles in this review reported that they used deep learning (34/161, 21.1%) [ 49 , 59 , 60 , 62 , 63 , 65 , 68 , 71 , 79 , 89 , 100 , 107 , 108 , 111 , 115 , 123 , 125 , 130 - 145 ], machine learning (9/161, 5.6%) [ 53 , 55 , 83 , 91 , 110 , 146 - 149 ], and image analysis approaches of GenAI during the assistance process (13/161, 8.1%) [ 68 , 88 , 104 , 110 , 111 , 114 , 116 , 119 , 133 , 135 , 138 , 150 , 151 ]. Knowledge access using GenAI has the potential to enable more options and flexibility in serving patients.…”
Section: Results
confidence: 99%
“…Deep learning (34/161, 21.1%) [ 49 , 59 , 60 , 62 , 63 , 65 , 68 , 71 , 79 , 89 , 100 , 107 , 108 , 111 , 115 , 123 , 125 , 130 - 145 ]…”
Section: Methods (unclassified)
“…Additionally, an interactive radiology report diagnostic and evaluation system based on an LLM can extend the detailed description of AI model detection results. For example, a study by Zhang et al ( 50 ) developed an AI captioning system based on the Transformer-based BERT model, which autonomously provides prior fields for lesion descriptions. This greatly enhances the model’s interpretability by providing textual descriptions based on lesion details and is expected to directly output diagnostic results.…”
Section: Discussion - Existing Challenges and Future Perspective
confidence: 99%
“…Artificial Intelligence (AI) is increasingly recognized as an important application in radiology [57,82,101,121]. In particular, the latest advancements in the creation and adaptation of multimodal foundation models (e.g., BioViL(-T) [8,17], ELIXR [137], MAIRA [58], Med-PaLM M [128]) invite high expectations of how the use of AI may transform clinical practice through efficiency and quality gains [121]; and improved overall patient care.…”
Section: Introduction
confidence: 99%
“…In this work, we focus particularly on the combination of large language models (LLMs) with vision capabilities, so-called vision-language models (VLMs). In the context of radiology imaging, this modality combination enables tasks such as: automatically generating a radiology report from a medical image (e.g., [57,58,148]); using text queries to answer questions about a radiology image (cf. [137]); or detecting errors in a radiology report text through comparison with the image.…”
Section: Introduction
confidence: 99%