Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) 2018
DOI: 10.18653/v1/p18-1240

On the Automatic Generation of Medical Imaging Reports

Abstract: Medical imaging is widely used in clinical practice for diagnosis and treatment. Report-writing can be error-prone for inexperienced physicians, and time-consuming and tedious for experienced physicians. To address these issues, we study the automatic generation of medical imaging reports. This task presents several challenges. First, a complete report contains multiple heterogeneous forms of information, including findings and tags. Second, abnormal regions in medical images are difficult to identify. Third, t…

Cited by 328 publications (375 citation statements); references 28 publications (26 reference statements).
“…Chest Radiographic Observations: The task is formulated as multi-label classification over 14 common radiographic observations following [5]: enlarged cardiomediastinum, cardiomegaly, lung opacity, lung lesion, edema, consolidation, pneumonia, atelectasis, pneumothorax, pleural effusion, pleural other, fracture, support devices, and no finding. Compared with previous studies using pretrained encoders based on ImageNet [6,14], pretraining with images from the same domain yields better results. We add one fully-connected layer as the classifier and compute the binary cross-entropy (BCE) loss.…”
Section: Image Encoder (mentioning)
confidence: 64%
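For readers unfamiliar with this setup, the sketch below illustrates the kind of multi-label head the excerpt describes: a single fully-connected layer over pooled image features, trained with binary cross-entropy over the 14 observations. The backbone choice (ResNet-50), feature dimension, and all variable names are assumptions for illustration, not details from the cited work.

```python
import torch
import torch.nn as nn
import torchvision.models as models

NUM_OBSERVATIONS = 14  # the 14 CheXpert-style radiographic observations

class ChestXrayClassifier(nn.Module):
    """Image encoder with one fully-connected multi-label head (illustrative sketch)."""
    def __init__(self, num_labels: int = NUM_OBSERVATIONS):
        super().__init__()
        # Backbone is an assumption; the excerpt only says the encoder is
        # pretrained on in-domain images rather than ImageNet.
        backbone = models.resnet50(weights=None)
        feat_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()          # keep the pooled feature vector
        self.encoder = backbone
        self.classifier = nn.Linear(feat_dim, num_labels)  # one FC layer

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.encoder(images))       # raw logits

model = ChestXrayClassifier()
criterion = nn.BCEWithLogitsLoss()  # BCE over independent labels

images = torch.randn(4, 3, 224, 224)                     # dummy batch
targets = torch.randint(0, 2, (4, NUM_OBSERVATIONS)).float()
loss = criterion(model(images), targets)
loss.backward()
```

Because each observation is predicted independently, an image can carry several labels at once (e.g. cardiomegaly and pleural effusion), which is why BCE is used rather than a softmax cross-entropy.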
“…Radiology Report Generation: The evaluation metrics we use are BLEU [9], METEOR [2], and ROUGE [8] scores, all of which are widely used in image captioning and machine translation tasks. We compare the proposed model with several state-of-the-art baselines: (1) a visual-attention-based image captioning model (Vis-Att) [13]; (2) radiology report generation models, including a hierarchical decoder with co-attention (Co-Att) [6], a multimodal generative model with visual attention (MM-Att) [14], and knowledge-driven retrieval-based report generation (KERP) [7]; and (3) the proposed multi-view encoder with hierarchical decoder (MvH), the base model with visual attention and early fusion (MvH+AttE), MvH with late fusion (MvH+AttL), and late fusion combined with medical concepts (MvH+AttL+MC). MvH+AttL+MC* is an oracle run based on ground-truth medical concepts, treated as an upper bound on the improvement obtainable from medical concepts.…”
Section: Methods (mentioning)
confidence: 99%
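A minimal sketch of how these metrics can be computed for one generated report against its reference. The library choices (nltk for BLEU/METEOR, Google's rouge-score package for ROUGE-L) and the example sentences are assumptions; the excerpt names the metrics but not an implementation.

```python
# Requires: pip install nltk rouge-score, plus nltk's wordnet data for METEOR.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from nltk.translate.meteor_score import meteor_score
from rouge_score import rouge_scorer

reference = "the heart size is normal and the lungs are clear".split()
hypothesis = "heart size is normal lungs are clear".split()

# BLEU-4 with smoothing, as is common for short generated sentences.
bleu = sentence_bleu([reference], hypothesis,
                     smoothing_function=SmoothingFunction().method1)

# METEOR takes pre-tokenized reference(s) and hypothesis.
meteor = meteor_score([reference], hypothesis)

# ROUGE-L measures longest-common-subsequence overlap.
scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)
rouge_l = scorer.score(" ".join(reference), " ".join(hypothesis))["rougeL"].fmeasure

print(f"BLEU-4: {bleu:.3f}  METEOR: {meteor:.3f}  ROUGE-L: {rouge_l:.3f}")
```

In practice these corpus-level n-gram metrics are known to correlate only loosely with clinical correctness, which is part of why the oracle MvH+AttL+MC* run is useful as an upper bound.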
“…The reports used in both cases are far more structured than their raw counterparts, so this approach cannot be directly translated to hospital data. Training on raw hospital reports, Jing et al [9] demonstrated that reports can be generated by first training a multi-label CNN on the images and the Medical Text Indexer (MTI) tags identified in the original raw reports of the OpenI chest X-ray dataset. However, reports can be very long and heterogeneous, and the authors do not evaluate the model's ability to determine whether visually and clinically relevant medical concepts have been identified.…”
Section: Radiology Report Generation (mentioning)
confidence: 99%
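The excerpt describes a two-stage pipeline: predict MTI-style tags from the image, then condition the report decoder on them. A minimal sketch of the tag-prediction step, under the assumption of a tiny illustrative tag vocabulary and a fixed probability threshold (both hypothetical, not from the cited work):

```python
import torch

# Hypothetical tag vocabulary; the real MTI vocabulary is far larger.
TAGS = ["cardiomegaly", "effusion", "opacity", "normal"]
THRESHOLD = 0.5  # illustrative cutoff

def logits_to_tags(logits: torch.Tensor) -> list[list[str]]:
    """Threshold per-tag sigmoid probabilities into discrete tag sets."""
    probs = torch.sigmoid(logits)            # shape: (batch, num_tags)
    keep = probs > THRESHOLD
    return [[TAGS[j] for j in range(len(TAGS)) if keep[i, j]]
            for i in range(logits.size(0))]

# Dummy logits for a batch of two images.
logits = torch.tensor([[2.1, -1.3, 0.4, -2.0],
                       [-3.0, -2.5, -1.0, 1.8]])
print(logits_to_tags(logits))
# -> [['cardiomegaly', 'opacity'], ['normal']]
```

The resulting tag lists would then be embedded and fed to the decoder alongside the visual features; the criticism in the excerpt is that this tag-level accuracy is never evaluated directly.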
“…Recently, we have seen supervised learning approaches that aim to take advantage of past radiological exams containing reports, either to auto-generate the reports [17,9,23] or to assist in classification tasks [16,21,20,24,22]. The noise present in medical reports, together with non-visually significant information such as the negation of pathologies, makes it difficult to learn from them directly as is done in natural image captioning frameworks.…”
Section: Introduction (mentioning)
confidence: 99%
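To make the negation problem concrete: the sentence "no evidence of pneumothorax" mentions a pathology that must not become a positive training label. Below is a deliberately simplified, NegEx-style sketch; the cue list, pathology list, and function are hypothetical, and production systems use far richer tools such as NegEx/ConText.

```python
import re

# Hypothetical, deliberately tiny cue and pathology lists.
NEGATION_CUES = ["no ", "without ", "no evidence of ", "negative for "]
PATHOLOGIES = ["pneumothorax", "effusion", "consolidation"]

def positive_mentions(sentence: str) -> list[str]:
    """Return pathologies mentioned affirmatively (not inside a negation cue's scope)."""
    s = sentence.lower()
    found = []
    for p in PATHOLOGIES:
        for m in re.finditer(p, s):
            prefix = s[:m.start()]
            # Naive scope rule: negated if a cue appears shortly before the mention.
            negated = any(prefix.rstrip().endswith(cue.strip()) or cue in prefix[-30:]
                          for cue in NEGATION_CUES)
            if not negated:
                found.append(p)
    return found

print(positive_mentions("No evidence of pneumothorax; small left effusion."))
# -> ['effusion']  (pneumothorax is negated, so it yields no positive label)
```

Naive captioning-style training would treat both mentions identically, which is exactly the failure mode the excerpt warns about.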