Proceedings of the Ninth International Workshop on Health Text Mining and Information Analysis 2018
DOI: 10.18653/v1/w18-5623

Learning to Summarize Radiology Findings

Abstract: The Impression section of a radiology report summarizes crucial radiology findings in natural language and plays a central role in communicating these findings to physicians. However, the process of generating impressions by summarizing findings is time-consuming for radiologists and prone to errors. We propose to automate the generation of radiology impressions with neural sequence-to-sequence learning. We further propose a customized neural model for this task which learns to encode the study background info…
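The sequence-to-sequence formulation described in the abstract can be pictured with a minimal sketch: an LSTM encoder reads the Findings section and an LSTM decoder generates the Impression. The module names, dimensions, and dummy inputs below are illustrative assumptions, not the authors' implementation (which additionally encodes the study background).

```python
# Minimal sketch of an LSTM encoder-decoder for findings -> impression
# summarization. All hyperparameters and names are assumptions.
import torch
import torch.nn as nn


class FindingsToImpression(nn.Module):
    def __init__(self, vocab_size: int, emb_dim: int = 128, hidden: int = 256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.decoder = nn.LSTM(emb_dim, 2 * hidden, batch_first=True)
        self.out = nn.Linear(2 * hidden, vocab_size)

    def forward(self, findings_ids, impression_ids):
        # Encode the findings tokens.
        _, (h, c) = self.encoder(self.embed(findings_ids))
        # Merge the two directions' final states to initialize the decoder.
        h0 = torch.cat([h[0], h[1]], dim=-1).unsqueeze(0)
        c0 = torch.cat([c[0], c[1]], dim=-1).unsqueeze(0)
        dec_out, _ = self.decoder(self.embed(impression_ids), (h0, c0))
        return self.out(dec_out)  # per-step vocabulary logits


# Example forward pass with dummy token ids
model = FindingsToImpression(vocab_size=50000)
logits = model(torch.randint(0, 50000, (2, 40)), torch.randint(0, 50000, (2, 10)))
print(logits.shape)  # (2, 10, 50000)
```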

Cited by 86 publications (87 citation statements)
References 21 publications
“…Recently, deep learning methods have also been used for automated generation of radiology reports, typically incorporating long short-term memory (LSTM) network models to generate the textual paragraphs [314,315,316,317], and also to identify findings in radiology reports [318,319,320].…”
Section: Content-based Image Retrieval
confidence: 99%
“…The baseline model is a pointer-generator network (See et al, 2017) with both copy and coverage mechanisms, and is trained with a coverage loss. We adopt the implementation of Zhang et al (2018). The vocabulary size is about 50,000, with uncased word embeddings pretrained from the PubMed RCT 200k training set and the abstracts from the PubMed dataset of long documents (Cohan et al, 2018).…”
Section: Methods
confidence: 99%
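The coverage mechanism mentioned in the statement above penalizes the decoder for repeatedly attending to source positions it has already covered. Below is a minimal sketch of that coverage loss, following See et al. (2017); tensor shapes and names are illustrative assumptions and not code from the cited implementation.

```python
# Sketch of the coverage loss used with pointer-generator networks.
import torch


def coverage_loss(attentions: torch.Tensor) -> torch.Tensor:
    """Coverage loss for a batch of decoder attention maps.

    attentions: (batch, dec_steps, src_len) attention distribution over
    source tokens at each decoder step.
    """
    batch, dec_steps, src_len = attentions.shape
    coverage = torch.zeros(batch, src_len)  # running sum of past attention
    loss = torch.zeros(batch)
    for t in range(dec_steps):
        attn_t = attentions[:, t, :]
        # Penalize re-attending to already-covered source positions.
        loss = loss + torch.minimum(attn_t, coverage).sum(dim=1)
        coverage = coverage + attn_t
    return loss.mean()


# Example: random attention distributions over a 12-token source
attn = torch.softmax(torch.randn(2, 5, 12), dim=-1)
print(coverage_loss(attn))
```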
“…Several medical domain natural language generation tasks have been studied using machine learning models, including generating radiology reports from images (Jing et al., 2018; Vaswani et al., 2017) and summarizing clinical reports (Zhang et al., 2018; Pivovarov and Elhadad, 2015) or research literature (Cohan et al., 2018). Recently, Gulden et al. (2019) studied extractive summarization on RCT descriptions.…”
Section: Medical Natural Language Generation
confidence: 99%
“…Here, we focus on summarization of clinical notes where content accuracy and completeness are more critical. The most relevant work to ours is by Zhang et al [19] where an additional section from the radiology report (background) is used to improve summarization. Extensive automated and human evaluation and analyses demonstrate the benefits of our proposed model in comparison with existing work.…”
Section: Related Work
confidence: 99%