Findings of the Association for Computational Linguistics: EMNLP 2020
DOI: 10.18653/v1/2020.findings-emnlp.176

Learning Visual-Semantic Embeddings for Reporting Abnormal Findings on Chest X-rays

Abstract: Automatic medical image report generation has drawn growing attention due to its potential to alleviate radiologists' workload. Existing work on report generation often trains encoder-decoder networks to generate complete reports. However, such models are affected by data bias (e.g. label imbalance) and face common issues inherent in text generation models (e.g. repetition). In this work, we focus on reporting abnormal findings on radiology images; instead of training on complete radiology reports, we propose …
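
The paper's specific proposal is elided in the truncated abstract above. Purely as a rough illustration of the general idea named in the title, learning a joint visual-semantic embedding space for images and report sentences, the sketch below projects pooled image features and encoded finding sentences into a shared space and trains them with a hinge-based triplet ranking loss over in-batch negatives. This is a minimal PyTorch sketch under assumed names and dimensions (JointEmbedding, triplet_ranking_loss, the 2048/768/512 sizes are all hypothetical); it is not the paper's actual architecture or training objective.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class JointEmbedding(nn.Module):
    """Hypothetical two-branch projector into a shared embedding space."""

    def __init__(self, img_dim=2048, txt_dim=768, embed_dim=512):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, embed_dim)  # projects pooled image features
        self.txt_proj = nn.Linear(txt_dim, embed_dim)  # projects encoded sentences

    def forward(self, img_feats, txt_feats):
        # L2-normalise so that dot products are cosine similarities
        v = F.normalize(self.img_proj(img_feats), dim=-1)
        t = F.normalize(self.txt_proj(txt_feats), dim=-1)
        return v, t


def triplet_ranking_loss(v, t, margin=0.2):
    """Hinge-based ranking loss over in-batch negatives (VSE-style)."""
    scores = v @ t.t()                         # (B, B) similarity matrix
    pos = scores.diag().view(-1, 1)            # similarities of matched image-report pairs
    cost_t = (margin + scores - pos).clamp(min=0)       # image anchored, wrong report
    cost_v = (margin + scores - pos.t()).clamp(min=0)   # report anchored, wrong image
    mask = torch.eye(scores.size(0), dtype=torch.bool, device=scores.device)
    return cost_t.masked_fill(mask, 0).mean() + cost_v.masked_fill(mask, 0).mean()


# Toy usage with random stand-ins for CNN image features and sentence encodings.
model = JointEmbedding()
img_feats = torch.randn(8, 2048)   # e.g. pooled chest X-ray CNN features (assumed)
txt_feats = torch.randn(8, 768)    # e.g. encoded abnormal-finding sentences (assumed)
v, t = model(img_feats, txt_feats)
loss = triplet_ranking_loss(v, t)
loss.backward()
```

Normalising both projections makes the dot product a cosine similarity, which is the usual scoring function in visual-semantic embedding work; the margin-based loss then pulls matched image-sentence pairs together and pushes mismatched pairs apart.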

Citations: cited by 12 publications (7 citation statements)
References: 11 publications
“…Recent studies employed reinforcement learning to generate multiple summaries with varying lengths for a given text (Hyun et al. 2022) and to optimize factual consistency of generated summaries (Roit et al. 2023).…”
Section: Related Work (mentioning)
confidence: 99%
“…Chest X-ray Report Generation Inspired by the success of deep learning models on image captioning, a lot of encoder-decoder based frameworks have been proposed (Jing et al., 2018, 2019; Liu et al., 2021a,b, 2019c; Yuan et al., 2019; Xue et al., 2018; Li et al., 2018; Zhang et al., 2020a; Kurisinkel et al., 2021; Ni et al., 2020; Nishino et al., 2020; Chen et al., 2020c; Wang et al., 2021; Boag et al., 2019; Syeda-Mahmood et al., 2020; Yang et al., 2020; Lovelace and Mortazavi, 2020; Zhang et al., 2020b; Miura et al., 2021). Specifically, Jing et al. (2018) proposed a hierarchical LSTM with the attention mechanism (Bahdanau et al., 2015b; You et al., 2016).…”
Section: Related Work (mentioning)
confidence: 99%
“…Chest X-ray Report Generation Inspired by the success of deep learning models on image captioning, a lot of encoder-decoder based frameworks have been proposed (Jing et al., 2018, 2019; Liu et al., 2021, 2019b; Yuan et al., 2019; Xue et al., 2018; Li et al., 2018; Zhang et al., 2020a; Kurisinkel et al., 2021; Ni et al., 2020; Nishino et al., 2020; Chen et al., 2020c; Wang et al., 2021; Boag et al., 2019; Syeda-Mahmood et al., 2020; Yang et al., 2020; Lovelace and Mortazavi, 2020; Zhang et al., 2020b; Miura et al., 2021). Specifically, Jing et al. (2018) proposed a hierarchical LSTM with the attention mechanism (Bahdanau et al., 2015b; You et al., 2016).…”
Section: Related Work (mentioning)
confidence: 99%