Findings of the Association for Computational Linguistics: EMNLP 2020
DOI: 10.18653/v1/2020.findings-emnlp.2
Summarizing Chinese Medical Answer with Graph Convolution Networks and Question-focused Dual Attention

Abstract: Online search engines are a popular source of medical information for users, where users can enter questions and obtain relevant answers. It is desirable to generate answer summaries for online search engines, particularly summaries that can reveal direct answers to questions. Moreover, answer summaries are expected to reveal the most relevant information in response to questions; hence, the summaries should be generated with a focus on the question, which is a challenging topic-focused summarization task. In …

Cited by 19 publications (9 citation statements); References 21 publications
“…Abstractive Answer Summarization: Another line of work has attempted abstractive answer summarization by treating the tagged best answer as the gold summary of all the other answers (Chowdhury and Chakraborty, 2019; Chowdhury et al., 2020). Recent work summarizes answers to medical questions via a medical concept graph (Zhang et al., 2020), and incorporates multi-hop reasoning (Zhang et al., 2020) and answer relevance from a QA model into the summarization model (Su et al., 2021). Most related to our dataset creation, Chowdhury and Chakraborty (2019) present CQASumm, a dataset of about 100k automatically-created examples consisting of the best answer as the gold summary, which, however, contains noise due to automatic creation.…”
Section: Related Work (mentioning)
confidence: 99%
“…• How to help models? We could use our approach of NAL to increase the weight of the knowledge learned in the pre-training task, or leverage external knowledge (Zhang et al., 2019, 2020a,b; Yu et al., 2020).…”
Section: Case Study (mentioning)
confidence: 99%
“…(2) Large-scale KGs are required during both fine-tuning and inference for obtaining outputs of knowledge encoders. This incurs an additional computation burden that limits their use in real-world applications that require high inference speed (Zhang et al., 2020).…”
Section: Introduction (mentioning)
confidence: 99%