2018
DOI: 10.48550/arxiv.1801.07704
Preprint

Query Focused Abstractive Summarization: Incorporating Query Relevance, Multi-Document Coverage, and Summary Length Constraints into seq2seq Models

Abstract: Query Focused Summarization (QFS) has been addressed mostly using extractive methods. Such methods, however, produce text which suffers from low coherence. We investigate how abstractive methods can be applied to QFS, to overcome such limitations. Recent developments in neural-attention based sequence-to-sequence models have led to state-of-the-art results on the task of abstractive generic single document summarization. Such models are trained in an end to end method on large amounts of training data. We addr…

Cited by 29 publications (70 citation statements)
References 10 publications

“…Multi-Document QFS To apply a summarization system trained on single-document data to a multi-document setting, we adopt a simple iterative generation approach (Baumel et al., 2018): we first rank documents in a cluster via query term frequency, and then generate summaries iteratively for each document. The final summary for the whole cluster is composed by concatenating document-level summaries.…”
Section: Results
confidence: 99%
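
The iterative approach quoted above is simple enough to sketch. Below is a minimal Python illustration, assuming some single-document abstractive summarizer is available as a callable summarize(document, query); that callable, the whitespace tokenization, and the function names are illustrative assumptions, not the cited authors' code.

from collections import Counter
from typing import Callable, List


def query_term_frequency(document: str, query: str) -> int:
    """Count how often the query's terms occur in the document."""
    counts = Counter(document.lower().split())
    return sum(counts[term] for term in set(query.lower().split()))


def summarize_cluster(documents: List[str], query: str,
                      summarize: Callable[[str, str], str]) -> str:
    """Rank documents in the cluster by query term frequency, summarize
    each document in turn, and concatenate the document-level summaries."""
    ranked = sorted(documents,
                    key=lambda d: query_term_frequency(d, query),
                    reverse=True)
    return " ".join(summarize(doc, query) for doc in ranked)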
“…Nema et al (2017) proposed an encode-attend-decode system with an additional query attention mechanism and diversity-based attention mechanism to generate a more queryrelevant summary. Baumel et al (2018) rated query relevance into a pre-trained abstractive summarizer to make the model aware of the query, while Xu and Lapata (2020a) discovered a new type of connection between generic summaries and QFS queries, and provided a universal representation for them which allows generic summarization data to be further exploited for QFS. Su et al (2020), meanwhile, built a query model for paragraph selection based on the answer relevance score and iteratively summarized paragraphs to a budget.…”
Section: Related Workmentioning
confidence: 99%
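
One simple reading of "incorporated query relevance" above is to scale the decoder's attention over source units by a per-unit query-relevance score and renormalize, in the spirit of the relevance-sensitive attention of Baumel et al. (2018). The sketch below is a minimal illustration under that assumption; the scoring and normalization choices are not the published model.

import numpy as np


def relevance_weighted_attention(attention: np.ndarray,
                                 relevance: np.ndarray) -> np.ndarray:
    """Scale attention weights by query-relevance scores, then
    renormalize so the weights again sum to one."""
    weighted = attention * relevance
    return weighted / weighted.sum(axis=-1, keepdims=True)


# Example: three source units, the second being most query-relevant.
attention = np.array([0.5, 0.3, 0.2])
relevance = np.array([0.1, 0.8, 0.1])
print(relevance_weighted_attention(attention, relevance))
# [0.161... 0.774... 0.064...] -- mass shifts toward the relevant unit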
“…Other work on abstractive QFS incorporated the query relevance into existing neural summarization models (Nema et al., 2017; Baumel et al., 2018). The closest work to ours was done by Su et al. (2020) and Xu and Lapata (2020a,b), who leveraged an external question answering (QA) module in a pipeline framework to take into consideration the answer relevance of the generated summary.…”
Section: Introduction
confidence: 99%
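
The QA-pipeline idea described above can also be sketched briefly: an external QA model scores how well each candidate passage answers the query, and the pipeline keeps the best-scoring content up to a length budget. The qa_score callable below is a hypothetical stand-in for whichever QA module is used; nothing here is an API from the cited papers.

from typing import Callable, List


def select_by_answer_relevance(passages: List[str], query: str,
                               qa_score: Callable[[str, str], float],
                               budget: int) -> List[str]:
    """Greedily keep the passages with the highest answer-relevance
    score for the query until the word budget is exhausted."""
    chosen, words = [], 0
    ranked = sorted(passages, key=lambda p: qa_score(query, p), reverse=True)
    for passage in ranked:
        length = len(passage.split())
        if words + length <= budget:
            chosen.append(passage)
            words += length
    return chosen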
“…With the advancement of neural networks (Bahdanau et al., 2014; Sutskever et al., 2014), the task of abstractive summarization has been receiving more attention (Rush et al., 2015; Chopra et al., 2016; Nallapati et al., 2016; Celikyilmaz et al., 2018; Chen and Bansal, 2018), while neural-based methods have also been developed for extractive summarization (Zhong et al., 2019b,a; Xu and Durrett, 2019; Cho et al., 2019; Zhong et al., 2020; Jia et al., 2020). Moreover, the field of text summarization has also been broadening into several subcategories, such as multi-document summarization (McKeown and Radev, 1995; Carbonell and Goldstein, 1998; Ganesan et al., 2010), query-based summarization (Daumé III and Marcu, 2006; Otterbacher et al., 2009; Wang et al., 2016; Litvak and Vanetik, 2017; Nema et al., 2017; Baumel et al., 2018; Kulkarni et al., 2020) and dialogue summarization (Zhong et al., 2021; Chen et al., 2021a,b; Gliwa et al., 2019; Chen and Yang, 2020). The proposed tasks, along with the datasets, can also be classified by domain, such as news (Hermann et al., 2015; Narayan et al., 2018), meetings (Zhong et al., 2021; Carletta et al., 2005; Janin et al., 2003), scientific literature (Cohan et al., 2018; Yasunaga et al., 2019), and medical records (DeYoung et al., 2021; Portet et al., 2009).…”
Section: Text Summarization
confidence: 99%